buger/jsonparser
One of the fastest alternative JSON parsers for Go that does not require a schema
Healthy across the board
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 4d ago
- ✓ 20 active contributors
- ✓ MIT licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Concentrated ownership — top contributor handles 54% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/buger/jsonparser)
Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/buger/jsonparser on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: buger/jsonparser
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/buger/jsonparser shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 4d ago
- 20 active contributors
- MIT licensed
- CI configured
- Tests present
- ⚠ Concentrated ownership — top contributor handles 54% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live buger/jsonparser repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/buger/jsonparser.

What it runs against: a local clone of buger/jsonparser — the script inspects the git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in buger/jsonparser | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 34 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of buger/jsonparser. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/buger/jsonparser.git
#   cd jsonparser
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of buger/jsonparser and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "buger/jsonparser(\.git)?\b" \
  && ok "origin remote is buger/jsonparser" \
  || miss "origin remote is not buger/jsonparser (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in parser.go bytes.go escape.go bytes_unsafe.go fuzz.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 34 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~4d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/buger/jsonparser"
  exit 1
fi
```
Each check prints `ok:` or `FAIL:`. The script exits non-zero if anything failed, so it composes cleanly into agent loops (`./verify.sh || regenerate-and-retry`).
⚡TL;DR
jsonparser is a high-performance JSON parser for Go that parses JSON without requiring schema definitions or struct mappings. It provides path-based field access (e.g., `Get(data, "person", "name", "fullName")`) and is up to 10x faster than Go's standard encoding/json library while allocating zero heap memory, making it ideal for unpredictable third-party API responses.

- Flat monolithic design: core parser logic in parser.go, with specialized byte handling in bytes.go, bytes_safe.go, and bytes_unsafe.go (unsafe optimizations for performance).
- Comprehensive test suite in _test.go files alongside the implementation.
- Formal specification requirements in specs/system and specs/stakeholder (YAML-based traceability).
- Isolated benchmark suite in benchmark/ with its own go.mod to avoid polluting main dependencies.
👥Who it's for
Go backend developers and API integrators who consume third-party JSON APIs with unpredictable or deeply nested structures and cannot afford the overhead of unmarshaling into structs or map[string]interface{} traversal. Particularly useful for those building data pipelines, API gateways, or services that need both schema-agnostic parsing and extreme performance.
🌱Maturity & risk
Production-ready and actively maintained. The codebase shows strong maturity indicators: comprehensive test coverage (parser_test.go, bytes_test.go, spec-driven tests under specs/), formal requirements traceability (specs/stakeholder and specs/system), CI/CD pipelines (GitHub Actions for cifuzz and reqproof), and security-conscious fuzzing setup (fuzz.go, oss-fuzz-build.sh). The Go Report Card badge and benchmarking suite suggest stable API and ongoing quality focus.
Low risk for core parsing, but several maintenance concerns: single visible maintainer (buger) with no evidence of active co-maintainers in the file list, last commit recency unknown from provided data, and the benchmark suite dependencies (benchmark/go.mod lists ffjson, easyjson, etc.) may drift over time. No breaking-change policy documented. Dependency surface is intentionally minimal (stdlib-only for core), reducing supply-chain risk.
Active areas of work
Active security and compliance focus: recent work includes cifuzz fuzzing pipeline (.github/workflows/cifuzz.yml), proof of requirements (reqproof.yml workflow, proof.yaml), MCDC test supplement coverage (mcdc_supplement_test.go), and dead code audits (dead_code_audit_test.go). Indicates shift toward formal verification and safety assurance rather than feature velocity.
🚀Get running
```shell
git clone https://github.com/buger/jsonparser.git
cd jsonparser
go test ./...
go run benchmark/benchmark.go
```
No external dependencies to install—uses stdlib only. Tests run via go test directly.
Daily commands:
- Run tests: `go test ./...`, or `go test -fuzz=FuzzParser -run=FuzzParser` for continuous fuzzing.
- Benchmark: `cd benchmark && go test -bench=. -benchmem`.
- Parse JSON in code: `import "github.com/buger/jsonparser"`, then call `jsonparser.Get(data, "path", "to", "field")`.
🗺️Map of the codebase
- parser.go — Core parsing engine implementing the schema-less JSON parser; contains the primary tokenization and value extraction logic that drives all library functionality.
- bytes.go — Byte manipulation utilities for efficient parsing; handles encoding/decoding operations critical to the parser's performance claims.
- escape.go — String escape sequence handling; essential for correctly parsing JSON strings containing escape sequences without allocating extra memory.
- bytes_unsafe.go — Performance-optimized unsafe byte operations; implements the zero-allocation strategy that differentiates this library from standard JSON packages.
- fuzz.go — Fuzzing entry point for security testing; demonstrates how the parser handles malformed and adversarial input reliably.
- parser_test.go — Comprehensive test suite validating parser correctness across edge cases, deeply nested structures, and protocol compliance.
- benchmark/benchmark.go — Benchmark harness comparing jsonparser against competing JSON libraries (easyjson, ffjson, gjson); validates performance claims.
🧩Components & responsibilities
- parser.go (Core Parser) (Go, unsafe pointers, state machine tokenization) — Implements schema-less JSON navigation via path-based field access (GetString, GetInt, GetFloat, GetArray, GetObject, Delete, Set)
  - Failure mode: Malformed JSON causes panics or incorrect values; missing fields return empty results; type mismatches are the caller's responsibility
- bytes.go / bytes_unsafe.go (Byte — undefined
🛠️How to make changes
Add Support for New JSON Type or Operation
- Implement the core parsing/extraction logic in parser.go following existing Get* function patterns (parser.go)
- Add low-level byte utilities if needed for the new operation in bytes.go and bytes_unsafe.go (bytes.go)
- Create comprehensive unit tests validating edge cases and correctness (parser_test.go)
- Add performance benchmarks comparing against competing libraries (benchmark/benchmark.go)
- Document the requirement in specs/system/requirements and update proof.yaml for traceability (proof.yaml)
Optimize Parser Performance for Specific Use Case
- Create a new benchmark test file in the benchmark/ directory for your specific payload size/structure (benchmark/benchmark_custom_payload_test.go)
- Profile the existing implementation using the benchmark and identify hot paths (parser.go)
- Implement micro-optimizations using unsafe operations in bytes_unsafe.go, ensuring a safe fallback in bytes_safe.go (bytes_unsafe.go)
- Validate that improvements don't break existing tests and update performance requirements (parser_test.go)
Add Fuzzing for New Code Path
- Define fuzzing targets in fuzz.go following LibFuzzer conventions (fuzz.go)
- Ensure the CI pipeline includes fuzzing via .github/workflows/cifuzz.yml (.github/workflows/cifuzz.yml)
- Add corpus examples and regression tests based on fuzzing findings (parser_test.go)
🔧Why these technologies
- Go (unsafe pointers) — Enables zero-allocation parsing via direct memory access without copying; critical for 10x performance claim over encoding/json
- LibFuzzer (via fuzz.go) — Continuous security testing catches parser edge cases and malformed input handling before they reach production
- YAML-based specification tracking (specs/) — Formal requirement traceability and proof system ensures correctness claims are auditable and testable
- Benchmark suite with multiple competitors — Validates and demonstrates the claimed performance advantage over easyjson, ffjson, ujson, and standard library
⚖️Trade-offs already made
- Use unsafe pointers for core byte operations
  - Why: Maximum performance and zero allocation
  - Consequence: Platform-specific code; requires a safe fallback (bytes_safe.go); potential pointer escape risks if misused
- No schema required; path-based field access
  - Why: Simplicity for unpredictable 3rd-party API responses; avoids struct definition overhead
  - Consequence: Type conversions deferred to caller; no compile-time schema validation; runtime type checking needed
- In-place string unescaping without copying
  - Why: Maintains the zero-allocation promise
  - Consequence: Unescaped strings point into the original JSON buffer; caller must not mutate the original while using the result
- Single-pass tokenization without tree construction
  - Why: Avoids intermediate allocations and reduces memory footprint
  - Consequence: Cannot reuse parsed structure for multiple queries; must re-scan for each field access
🚫Non-goals (don't propose these)
- Does not require pre-defined data structures or code generation
- Not a full JSON unmarshaler to Go types (intentionally minimal)
- Does not stream process unbounded JSON (loads full payload into memory)
- Does not provide schema validation or JSON Schema support
- Does not support JSONPath or XPath-like query languages beyond simple key paths
🪤Traps & gotchas
- Unsafe memory access: bytes_unsafe.go uses unsafe.Pointer and direct memory access — breaks on platforms that disable unsafe (rare, but possible in restricted environments).
- Escape sequence handling: escape.go's Unicode handling is non-trivial; invalid UTF-8 in raw byte slices may not error gracefully.
- Index syntax quirk: Array index syntax requires string format (e.g., Get(data, "array", "[0]", "field")), not an integer — easy to miss.
- No schema validation: The parser accepts any valid JSON; the caller must validate that fields exist with the expected types before trusting GetInt/GetString results (check the returned err).
- Benchmark isolation: benchmark/go.mod is separate and can drift from the main module's Go version; pin both.
🏗️Architecture
💡Concepts to learn
- Unsafe pointer arithmetic — bytes_unsafe.go relies on this to achieve zero-copy slicing; understanding when unsafe is safe vs. risky is critical to contributing performance improvements without memory safety bugs
- Recursive descent parsing — Core algorithm in parser.go; essential to understand how the parser walks JSON structure without regex or tokenization, and how path resolution works at each recursion level
- Zero-copy I/O — The fundamental performance trick: jsonparser returns byte slices pointing into the original input buffer rather than allocating new strings; understanding slice aliases and when copying occurs is key to the speed claim
- Modified Condition/Decision Coverage (MCDC) — mcdc_supplement_test.go applies this formal test metric; understanding MCDC is essential to adding new code paths without introducing hidden logic bugs
- Fuzzing — fuzz.go and the OSS-Fuzz integration (oss-fuzz-build.sh) test parser robustness against malformed/adversarial JSON; understanding fuzzing harness design helps improve security tests
- Unicode escape sequences in JSON — escape.go handles JSON's \uXXXX and surrogate pair syntax; subtle bugs here silently corrupt non-ASCII text, making it a common source of hard-to-debug issues
- Index-based array access syntax — jsonparser uses string-based index syntax ("[0]") rather than integer arguments; this design choice avoids varargs type ambiguity but is non-obvious and must be documented for users
🔗Related repos
- mailru/easyjson — Code-generation-based JSON parser for Go; faster for known schemas but requires code generation, whereas jsonparser is schema-free and runtime-evaluated
- pquerna/ffjson — Another code-generation JSON tool; included in jsonparser's own benchmarks (benchmark/benchmark_ffjson.go) as a reference competitor
- tidwall/gjson — Modern schema-free JSON path querying library (Go 1.10+); similar use case to jsonparser with a different API (dot notation vs. positional args)
- Jeffail/gabs — Query and modify JSON in Go without schemas; used in jsonparser benchmarks as a comparison point, shows what jsonparser optimizes against
- golang/go — Standard library encoding/json package; the baseline performance target and the reason jsonparser exists (it claims a 10x speedup vs. stdlib)
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive benchmarks for unsafe vs safe byte operations
The repo has both bytes_unsafe.go and bytes_safe.go implementations with corresponding test files (bytes_unsafe_test.go, bytes_test.go), but there's no dedicated benchmark comparing the performance trade-offs between these two approaches. Since the library advertises being '10x faster' and the README emphasizes performance, adding detailed benchmarks in benchmark/ would help users understand when to use which implementation and validate performance claims across different payload types.
- [ ] Create benchmark/benchmark_unsafe_vs_safe_test.go comparing bytes_unsafe.go vs bytes_safe.go operations
- [ ] Add benchmarks for common operations: GetString, GetInt, GetFloat, GetBoolean across both implementations
- [ ] Test with small, medium, and large payloads to show performance differences at scale
- [ ] Document findings in README.md with recommendations for when each implementation should be preferred
Add integration tests for the fuzz.go fuzzing harness with CI validation
The repo has fuzz.go and oss-fuzz-build.sh for fuzzing support, but there's no automated CI workflow validating that the fuzzing harness compiles and runs correctly. Given that coverage tests (coverage_closure_test.go, dead_code_audit_test.go) exist and there's a cifuzz.yml workflow file, adding a test to validate the fuzzing build process in CI would catch regressions early and ensure security quality.
- [ ] Create fuzz_integration_test.go to test that fuzz.go exports valid fuzzing functions
- [ ] Update .github/workflows/cifuzz.yml to run 'go test -fuzz' for a short duration on each PR
- [ ] Verify oss-fuzz-build.sh executes successfully in CI without manual intervention
- [ ] Document fuzzing setup in README.md for new contributors interested in security testing
Implement missing test coverage for error handling across specification requirements
The repo has formal requirement specs (specs/system/requirements/SYS-REQ-.yaml, specs/stakeholder/requirements/STK-REQ-.yaml) and error-focused tests (parser_error_test.go), but test files like parser_error_test.go lack cross-referencing with specific requirement IDs. Creating a test matrix that validates each SYS-REQ and STK-REQ against corresponding test cases would improve traceability and ensure all formal requirements have passing tests.
- [ ] Review all files in specs/system/requirements/ and specs/stakeholder/requirements/ to extract requirement IDs and descriptions
- [ ] Create parser_requirements_test.go with test cases explicitly mapped to each SYS-REQ and STK-REQ using test subtests
- [ ] Ensure parser_error_test.go and existing tests are cross-referenced with requirement IDs in comments
- [ ] Add a test matrix table in proof.yaml (or create .md documentation) showing requirement → test file mapping
🌿Good first issues
- Add error type documentation: parser_error_test.go exists but no exported error types are documented in README or godoc. Add error constant definitions to parser.go (e.g., ErrKeyNotFound, ErrTypeMismatch) with exported types and update README.md with error handling example.
- Expand escape_test.go coverage: Currently only escape.go exists with limited visible test cases. Add comprehensive tests for edge cases: surrogate pairs, invalid UTF-8 sequences, mixed ASCII/Unicode strings. Match coverage to mcdc_supplement_test.go rigor.
- Create integration example for ArrayEach + GetString chain: README shows ArrayEach example extracting URL but doesn't show a production pattern combining ArrayEach callback with nested GetString calls. Add benchmark/example_nested_arrays_test.go demonstrating filtering and type-safe extraction.
⭐Top contributors
- @buger — 54 commits
- @AllenX2018 — 19 commits
- @AdamKorcz — 7 commits
- @daria-kay — 2 commits
- @d-hat — 2 commits
📝Recent commits
- c172c16 — Merge pull request #269 from buger/tinygo (buger)
- 680cd2e — Merge pull request #281 from buger/reqproof-assurance-hardening (buger)
- 9dce61c — Migrate review storage from reviews/ folder to per-requirement timestamps (buger)
- c03b9ef — feat: add property-based obligation classes with 24 new SYS-REQs (buger)
- 9c46110 — chore: fix spec lint warnings — remove stale parent field, set review metadata (buger)
- 8bbb8a8 — Close coverage gaps: SYS-REQ-007/008/010 fuzz harness coverage to 100% (buger)
- 552e93b — Install Z3 via apt before audit (buger)
- 98133b4 — Remove manual Z3 pre-download, now handled by proof-action (buger)
- 1b70ead — Debug Z3 pre-download: remove output suppression (buger)
- aac1fbc — Pre-download Z3 solver before audit (buger)
🔒Security observations
The jsonparser project has significant security concerns primarily related to infrastructure and dependency management. The Dockerfile uses Go 1.6 (7+ years outdated) and fetches outdated dependencies from 2017-2019, creating substantial vulnerability exposure. While the codebase itself includes good practices (fuzzing, unsafe code auditing, comprehensive specs), the build and runtime environment is severely compromised. Critical actions: (1) Update Go to 1.21+, (2) Update all dependencies and establish regular update cadence, (3) Implement go.sum verification in build process, (4) Audit unsafe memory operations. The presence of fuzz testing and comprehensive specifications is positive but overshadowed by dependency and infrastructure risks.
- High · Outdated Go Base Image — Dockerfile. The Dockerfile uses Go 1.6, which is extremely outdated (released March 2016) and no longer receives security updates. This exposes the build environment and any compiled binaries to known vulnerabilities in the Go runtime. Fix: Update to a current, actively maintained Go version (1.21+). Use 'FROM golang:1.21-alpine' or a later stable release.
- High · Vulnerable Dependency Versions — benchmark/go.mod and Dockerfile dependency fetch commands. Multiple dependencies use pinned versions from 2017-2019 with known vulnerabilities: github.com/buger/jsonparser (2018), mailru/easyjson (2019), pquerna/ffjson (2018), and ugorji/go (2019). These packages have not been updated in years and likely contain security flaws. Fix: Update all dependencies to the latest stable versions. Run 'go get -u ./...' and review security advisories via 'go list -json -m all' or security scanners like nancy/trivy.
- High · Missing Go Module Verification — Dockerfile RUN commands. The Dockerfile uses 'go get' without module verification flags. Without go.sum verification or GOSUMDB validation, there is a risk of dependency tampering or man-in-the-middle attacks. Fix: Ensure go.mod and go.sum files are committed and verified. Use the '-mod=readonly' flag or modern Go with proper module pinning. Consider using 'COPY go.* .' before 'RUN go mod download'.
- Medium · Unsafe Memory Operations — bytes_unsafe.go, bytes_unsafe_test.go. The codebase includes 'bytes_unsafe.go' and uses unsafe pointer operations (indicated by the fuzz build and unsafe tag usage). While performance-oriented, unsafe code requires careful review for buffer overflows and memory corruption vulnerabilities. Fix: Conduct a thorough security audit of unsafe code blocks. Ensure all unsafe operations have bounds checking. Consider using safe alternatives where possible or adding defensive checks.
- Medium · Fuzz Testing Present but Coverage Unclear — fuzz.go, .github/workflows/cifuzz.yml, oss-fuzz-build.sh. While fuzz.go and the cifuzz workflow exist (good practice), effectiveness depends on ongoing maintenance. OSS-Fuzz integration is present, but it is unclear whether it is actively maintained or whether findings are addressed. Fix: Ensure OSS-Fuzz builds are regularly executed and any findings are promptly addressed. Document fuzzing status and maintain active integration with security testing pipelines.
- Low · No SECURITY.md Policy — Repository root. No visible security policy file (SECURITY.md) for vulnerability disclosure, making it difficult for security researchers to report issues responsibly. Fix: Create a SECURITY.md file with vulnerability disclosure guidelines and contact information.
- Low · Limited Security Test Coverage Visibility — Test file structure. While dead code audits and MCDC tests exist, security-specific test files are minimal. There are no visible tests for malformed input handling, overflow conditions, or regex/denial-of-service scenarios. Fix: Expand the test suite to include security-focused scenarios: malformed JSON, extremely nested structures, large payloads, and adversarial inputs.
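A hedged sketch of the Dockerfile direction the first three fixes point at. Version, base image, and layer order are illustrative assumptions, not the repo's current file:

```dockerfile
# Illustrative only — assumes go.mod/go.sum exist at the repo root.
FROM golang:1.22-alpine
WORKDIR /src
# Copy module files first so dependency download is cached and verified
# against go.sum; -mod=readonly refuses unverified module changes.
COPY go.mod go.sum ./
ENV GOFLAGS=-mod=readonly
RUN go mod download
COPY . .
RUN go build ./...
```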
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.