gunnarmorling/1brc
1️⃣🐝🏎️ The One Billion Row Challenge -- A fun exploration of how quickly 1B rows from a text file can be aggregated with Java
Healthy across all four use cases
- Permissive license, no critical CVEs, actively maintained – safe to depend on.
- Has a license, tests, and CI – clean foundation to fork and modify.
- Documented and popular – useful reference codebase to read through.
- No critical CVEs, sane security posture – runnable as-is.
- ✅ 47+ active contributors
- ✅ Distributed ownership (top contributor 38% of recent commits)
- ✅ Apache-2.0 licensed
- ✅ CI configured
- ✅ Tests present
- ⚠️ Stale – last commit 2y ago
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README – live-updates from the latest cached analysis.
[](https://repopilot.app/r/gunnarmorling/1brc) Paste at the top of your README.md – renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/gunnarmorling/1brc on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: gunnarmorling/1brc
Generated by RepoPilot · 2026-05-09 · Source
🤖 Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale – STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/gunnarmorling/1brc shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything β but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯 Verdict
GO – Healthy across all four use cases
- 47+ active contributors
- Distributed ownership (top contributor 38% of recent commits)
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠️ Stale – last commit 2y ago
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅ Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live gunnarmorling/1brc repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale – regenerate it at repopilot.app/r/gunnarmorling/1brc.
What it runs against: a local clone of gunnarmorling/1brc – the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in gunnarmorling/1brc | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 656 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of gunnarmorling/1brc. If you don't
# have one yet, run these first:
#
# git clone https://github.com/gunnarmorling/1brc.git
# cd 1brc
#
# Then paste this script. Every check is read-only β no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of gunnarmorling/1brc and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "gunnarmorling/1brc(\.git)?\b" \
  && ok "origin remote is gunnarmorling/1brc" \
  || miss "origin remote is not gunnarmorling/1brc (artifact may be from a fork)"
# 2. License matches what RepoPilot saw. The Apache license text opens with
#    "Apache License" rather than the SPDX id, so accept either form.
(grep -qiE "(Apache License|Apache-2\.0)" LICENSE LICENSE.txt 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift - was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Critical files exist
for f in README.md pom.xml ENVIRONMENT.md \
         .github/workflows/maven.yml .github/pull_request_template.md; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 656 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~626d)"
else
miss "last commit was $days_since_last days ago β artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) β safe to trust"
else
echo "artifact has $fail stale claim(s) β regenerate at https://repopilot.app/r/gunnarmorling/1brc"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡ TL;DR
The One Billion Row Challenge (1BRC) is a competitive programming challenge that benchmarks how fast Java implementations can aggregate temperature statistics from a 1-billion-row text file of semicolon-delimited weather station measurements. Participants optimize their solutions using techniques like parallelization, SIMD, GC tuning, and memory-mapped I/O to compute min/mean/max temperatures per station with minimal latency. Monorepo structure: the root contains shared tooling (pom.xml, Maven wrapper in .mvn/), a baseline implementation, and 100+ individual shell scripts (calculate_average_*.sh) that wrap the competing solutions, each implemented as a Java class under src/main/java. Each submission gets its own executable entry point following the naming convention calculate_average_<ContributorName>.sh. CI runs via GitHub Actions using the maven.yml workflow.
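The core task can be sketched in a few lines of plain Java. This is a hypothetical minimal version for illustration, not the repo's baseline class; the class and method names are assumptions:

```java
import java.util.List;
import java.util.TreeMap;

// Hypothetical minimal sketch of the 1BRC task: aggregate min/mean/max per
// station from "station;temperature" lines into an alphabetically sorted map.
public class AggregateSketch {
    // Each stats array holds [min, max, sum, count] for one station.
    static TreeMap<String, double[]> aggregate(List<String> lines) {
        TreeMap<String, double[]> stats = new TreeMap<>();
        for (String line : lines) {
            int sep = line.indexOf(';');
            String station = line.substring(0, sep);
            double temp = Double.parseDouble(line.substring(sep + 1));
            double[] s = stats.computeIfAbsent(station,
                    k -> new double[] { Double.MAX_VALUE, -Double.MAX_VALUE, 0, 0 });
            s[0] = Math.min(s[0], temp); // min
            s[1] = Math.max(s[1], temp); // max
            s[2] += temp;                // sum (mean = sum / count)
            s[3]++;                      // count
        }
        return stats;
    }

    public static void main(String[] args) {
        var stats = aggregate(List.of("Hamburg;12.0", "Hamburg;8.0", "Bulawayo;8.9"));
        double[] hh = stats.get("Hamburg");
        System.out.printf("Hamburg=%.1f/%.1f/%.1f%n", hh[0], hh[2] / hh[3], hh[1]);
        // prints: Hamburg=8.0/10.0/12.0
    }
}
```

The competition is essentially about making this loop fast at 1B-row scale.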
👥 Who it's for
Java performance engineers and competitive programmers who want to push the limits of JVM optimization, learn advanced concurrency patterns (virtual threads, parallel streams), and compete on a standardized benchmark evaluated on identical hardware (Hetzner AX161 8-core setup).
🌱 Maturity & risk
The challenge ran through January 2024 and closed to submissions on Jan 31, 2024; final leaderboards were published Feb 4, 2024. The repo is now in legacy/archive status as a completed challenge artifact. It contains 100+ battle-tested reference implementations with CI pipelines (Maven, GitHub Actions workflow in .github/workflows/maven.yml) but is not maintained for feature development – it's a snapshot of competition results.
Standard open source risks apply.
Active areas of work
The challenge is closed as of Feb 1, 2024. No new submissions are accepted. The repo is in finalization/publication phase – results have been published, leaderboards are locked, and the repo now serves as an immutable record of all 100+ submissions with their rankings and performance metrics on standardized hardware.
🚀 Get running
git clone https://github.com/gunnarmorling/1brc.git
cd 1brc
./mvnw clean verify
Then run a specific submission with: ./calculate_average_baseline.sh (the wrapper scripts read measurements.txt from the repo root; you must first generate or obtain the 1B-row test file per ENVIRONMENT.md).
Daily commands:
- ./mvnw clean verify – compile all submissions.
- ./calculate_average_baseline.sh – run a single submission (most wrapper scripts read measurements.txt from the repo root).
- To regenerate the 1B-row test file, see ENVIRONMENT.md for the data generator command.
Each .sh file is a self-contained entry point that invokes the compiled Java code with optimized JVM flags.
🗺️ Map of the codebase
- README.md → Defines the One Billion Row Challenge rules, evaluation criteria, and submission guidelines that all contributors must follow
- pom.xml → Maven configuration that specifies Java version, compiler settings, and build profiles used across all submissions
- ENVIRONMENT.md → Documents the evaluation environment specifications, JVM flags, and hardware constraints critical for optimizing submissions
- .github/workflows/maven.yml → CI/CD pipeline that validates submissions are buildable and runnable within the challenge constraints
- .github/pull_request_template.md → Submission template that standardizes how contributors structure their PR descriptions and metadata
- LICENSE.txt → Apache 2.0 license establishing legal terms for all contributions to the challenge
🧩 Components & responsibilities
- Shell Script Wrapper (Bash, JVM flags (-Xmx, -XX:+UseG1GC, etc.)) β Entry point that specifies JVM flags, classpath, memory settings, and invokes the Java main class
- Failure mode: If script has syntax errors or incorrect classpath, submission fails immediately; no output produced
- Java Implementation (CalculateAverage_*) (Java I/O, HashMap/TreeMap, String parsing, optionally Unsafe, Vector API, or memory-mapped files) β Core algorithm that reads measurements.txt line by line, parses city and temperature, and aggregates statistics into min/max/sum/count per city
- Failure mode: Parsing errors, arithmetic overflow, or incorrect aggregation logic produces wrong temperature values; timeout if algorithm is too slow
- Maven Build System (Maven pom.xml, javac, JDK 21+) β Validates syntax, compiles all submissions, runs optional unit tests, and packages bytecode for execution
- Failure mode: Compilation errors prevent submission from reaching evaluation stage; CI pipeline exits with non-zero status
- GitHub Actions CI (GitHub Actions workflow (maven.yml), ubuntu-latest runner) β Automated gating that ensures submissions are buildable before manual review and leaderboard evaluation
- Failure mode: If CI fails, PR is marked as broken and blocked from leaderboard consideration
🌊 Data flow
- measurements.txt (1B-row file) → Shell Script Wrapper: file path is hardcoded or passed as an argument to ./calculate_average_Username.sh
- Shell Script Wrapper → JVM process: script spawns a JVM with optimized flags and a classpath pointing to the compiled Java class
- JVM → Java Implementation: JVM classloader loads and executes CalculateAverage_* bytecode from the classpath
- Java Implementation → In-memory aggregation (HashMap/TreeMap): parser reads each line, extracts station and temperature, updates thread-local or shared hash map counters
- In-memory aggregation → Merge phase: parallel threads synchronize and combine their per-station statistics (min, max, sum, count)
- Merge phase → TreeMap: final sorted map keyed by station name ensures deterministic, alphabetically ordered output
- TreeMap → stdout (formatted string): Java implementation prints the result as {station1=min/mean/max, station2=...} to the console
- stdout → Evaluation harness: script captures output; the evaluation machine validates correctness and measures end-to-end runtime (wall clock)
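The merge phase in this flow can be sketched as follows. All names here are illustrative; real submissions merge per-thread partial maps in essentially this shape:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of the merge phase: each worker produces a partial map of
// [min, max, sum, count] per station; partials are folded into one TreeMap
// so the final output order is deterministic and alphabetical.
public class MergeSketch {
    static TreeMap<String, double[]> merge(Iterable<Map<String, double[]>> partials) {
        TreeMap<String, double[]> merged = new TreeMap<>();
        for (Map<String, double[]> partial : partials) {
            // clone() so the partial maps are left untouched
            partial.forEach((station, s) -> merged.merge(station, s.clone(), (a, b) -> {
                a[0] = Math.min(a[0], b[0]); // min of mins
                a[1] = Math.max(a[1], b[1]); // max of maxes
                a[2] += b[2];                // sums add
                a[3] += b[3];                // counts add
                return a;
            }));
        }
        return merged;
    }

    public static void main(String[] args) {
        Map<String, double[]> worker1 = new HashMap<>(Map.of("Oslo", new double[] { -2.0, 5.0, 3.0, 2 }));
        Map<String, double[]> worker2 = new HashMap<>(Map.of("Oslo", new double[] { 1.0, 9.0, 10.0, 1 }));
        double[] s = merge(java.util.List.of(worker1, worker2)).get("Oslo");
        System.out.printf("Oslo=%.1f/%.1f/%.1f%n", s[0], s[2] / s[3], s[1]);
        // prints: Oslo=-2.0/4.3/9.0
    }
}
```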
🛠️ How to make changes
Submit a New 1BRC Solution
- Create a new Java class in src/main/java/ that reads the 1B-row input file and computes station temperature statistics (min/max/average) (src/main/java/dev/morling/onebrc/CalculateAverage_YourUsername.java)
- Create a shell script wrapper that compiles and runs your Java implementation with appropriate JVM flags (calculate_average_YourUsername.sh)
- Submit a PR with your shell script added to the repository root, following the submission template (.github/pull_request_template.md)
- Ensure your implementation runs in ≤10 seconds on the evaluation machine and produces correct temperature averages per station in TreeMap-sorted order (ENVIRONMENT.md)
Optimize Java Performance for 1BRC
- Review ENVIRONMENT.md to understand available heap size, number of CPU cores, and allowed JVM flags (e.g., -XX:+UseG1GC, -XX:+AlwaysPreTouch) (ENVIRONMENT.md)
- Study the baseline implementation to understand the input format (station;temperature as semicolon-delimited lines) (calculate_average_baseline.sh)
- Implement optimizations like memory-mapped I/O, custom hash maps, parallel streams, or unsafe memory access in your Java class (src/main/java/dev/morling/onebrc/CalculateAverage_YourUsername.java)
- Configure Maven compiler settings in pom.xml for target Java version and compiler flags if needed (pom.xml)
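To illustrate the memory-mapped I/O optimization mentioned above (a hedged sketch, not code from any submission): the pattern is to map the file once and walk raw bytes instead of allocating a String per line. This toy version only counts newlines, which is the same byte traversal a fast parser performs; the file name is illustrative:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the memory-mapped approach: map the file and scan raw bytes.
// A real 13 GB measurements.txt exceeds a single MappedByteBuffer's 2 GB
// limit, so submissions map the file in chunks; this demo uses a tiny file.
public class MmapSketch {
    static long countLines(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            long lines = 0;
            while (buf.hasRemaining()) {
                if (buf.get() == '\n') lines++; // byte-level scan, no String allocation
            }
            return lines;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("measurements", ".txt");
        Files.writeString(tmp, "Hamburg;12.0\nBulawayo;8.9\n");
        System.out.println(countLines(tmp)); // prints 2
        Files.delete(tmp);
    }
}
```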
🔧 Why these technologies
- Java (JDK 21+) β Challenge requires Java for compute performance; allows access to advanced JVM features like vector API, virtual threads, and unsafe memory operations
- Maven β Standardized build system ensures reproducible compilation and packaging across 100+ diverse implementations
- GitHub Actions β Automated CI validation gates that submissions compile and execute within time constraints before manual review
- Shell scripts β Lightweight execution wrapper that specifies custom JVM flags, heap size, and GC settings optimized per submission
⚖️ Trade-offs already made
- Single-machine evaluation vs. distributed cloud testing
- Why: Ensures fair comparison of all submissions under identical hardware/JVM conditions; simplifies leaderboard interpretation
- Consequence: Challenge is compute-bound on one machine; results may not reflect distributed system performance or cloud deployments
- Strict 10-second runtime limit for new submissions
- Why: Prevents runaway submissions that would block evaluation queue; focuses on practical, optimizable solutions
- Consequence: Favors high-effort algorithmic and JVM-tuning submissions; precludes ultra-brute-force approaches
- TreeMap-sorted output requirement
- Why: Standardizes output format for easy correctness validation and deterministic comparison across submissions
- Consequence: Adds small overhead to all submissions; rules out unsorted output optimizations
- No framework/library restrictions
- Why: Encourages creativity and showcases real-world optimization techniques (memory-mapped I/O, unsafe memory, custom collections)
- Consequence: Submissions vary widely in complexity; harder to extract generalizable best practices for typical use cases
🚫 Non-goals (don't propose these)
- Distributed or cloud-based challenge evaluation
- Real-time streaming aggregation (one-shot 1B row batch processing only)
- Multi-language support (Java only)
- Handling of incremental/append-only data files
- Submission framework or SDK (each contributor writes from scratch)
- Production deployment or fault tolerance guarantees
🪤 Traps & gotchas
Critical: the 1-billion-row test file (measurements.txt, ~13 GB) is not included in the repo – you must generate it yourself using the procedure in ENVIRONMENT.md or download it from the challenge infrastructure. Submissions optimize aggressively with JVM flags (e.g., -XX:+UseG1GC, -XX:+UnlockDiagnosticVMOptions, -XX:+PrintFlagsFinal) that may not work on all JDK versions or hardware; test with the exact JDK 21 version and CPU architecture named in ENVIRONMENT.md. The shell scripts use bash-specific features and expect a Unix environment (not Windows natively). Output format is strict: {Station1=min/mean/max, Station2=min/mean/max, ...} with stations sorted alphabetically and values rounded to 1 decimal place – any deviation fails validation.
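The strict output shape above can be sketched like this. The rounding helper uses Math.round(x * 10.0) / 10.0 purely as an illustration; consult the challenge README for the exact rounding semantics it mandates, and note the class name is hypothetical:

```java
import java.util.Map;
import java.util.StringJoiner;
import java.util.TreeMap;

// Sketch of the "{Station=min/mean/max, ...}" output contract with
// one-decimal rounding. Stats arrays hold [min, max, sum, count].
public class FormatSketch {
    // Illustrative rounding to 1 decimal place; the challenge defines the
    // authoritative rounding rule in its README.
    static double round1(double v) {
        return Math.round(v * 10.0) / 10.0;
    }

    static String format(TreeMap<String, double[]> stats) {
        StringJoiner out = new StringJoiner(", ", "{", "}");
        for (Map.Entry<String, double[]> e : stats.entrySet()) {
            double[] s = e.getValue();
            out.add(e.getKey() + "=" + round1(s[0]) + "/"
                    + round1(s[2] / s[3]) + "/" + round1(s[1]));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        TreeMap<String, double[]> stats = new TreeMap<>();
        stats.put("Abha", new double[] { -23.0, 59.2, 36.2, 2 });
        System.out.println(format(stats)); // prints: {Abha=-23.0/18.1/59.2}
    }
}
```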
🏗️ Architecture
💡 Concepts to learn
- Virtual Threads (Project Loom, JEP 444) → Some 1BRC submissions use virtual threads (preview in JDK 19, final in JDK 21) to parallelize parsing across 1B rows with minimal overhead; understanding this is key to modern Java concurrency.
- Memory-Mapped I/O (java.nio.MappedByteBuffer) → High-performance submissions avoid standard file streams and use MappedByteBuffer to access the 1B-row file as fast off-heap memory, bypassing JVM buffers.
- SIMD Vectorization (Vector API, incubating since JDK 16) → The fastest submissions leverage Java's Vector API to process input in SIMD-aligned chunks (parsing multiple fields in parallel); critical for reaching the lowest runtimes.
- GC Tuning → Many submissions explicitly tune GC flags (-XX:+UseG1GC, -XX:G1HeapRegionSize), and allocation-free solutions even disable collection entirely with -XX:+UseEpsilonGC, because GC overhead is a bottleneck during 1B-row aggregation.
- Custom Hash Tables & Collision Resolution → Aggregating by station name requires fast hash lookups; optimized entries implement custom hash tables (instead of HashMap) with hand-tuned collision chains or linear probing for cache locality.
- Unsafe Memory Access (sun.misc.Unsafe) → Some 1BRC submissions use Unsafe to bypass bounds checks and directly access memory for parsing, trading safety for raw speed on a one-off benchmark.
- Cache-Aware Data Layout (False-Sharing Avoidance) → Multi-threaded submissions must avoid false sharing when aggregating statistics in parallel; padding or @jdk.internal.vm.annotation.Contended ensures each thread's writes stay in separate CPU cache lines.
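The chunk-per-task pattern behind several of these concepts can be sketched as below (JDK 21). This is a simplified shape with hypothetical names: real submissions avoid a shared concurrent map in favor of per-thread locals merged at the end, but the task decomposition is the same:

```java
import java.util.List;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one virtual thread per chunk of lines, aggregating into a shared
// ConcurrentHashMap, then copying into a TreeMap for sorted output.
public class ParallelSketch {
    static TreeMap<String, double[]> run(List<List<String>> chunks) {
        ConcurrentHashMap<String, double[]> stats = new ConcurrentHashMap<>();
        // ExecutorService is AutoCloseable since JDK 19; close() awaits tasks.
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            for (List<String> chunk : chunks) {
                pool.submit(() -> {
                    for (String line : chunk) {
                        int sep = line.indexOf(';');
                        double t = Double.parseDouble(line.substring(sep + 1));
                        // merge's remapping function runs atomically per key,
                        // so mutating the stored [min, max, sum, count] is safe here
                        stats.merge(line.substring(0, sep),
                                new double[] { t, t, t, 1 }, (a, b) -> {
                                    a[0] = Math.min(a[0], b[0]);
                                    a[1] = Math.max(a[1], b[1]);
                                    a[2] += b[2];
                                    a[3] += b[3];
                                    return a;
                                });
                    }
                });
            }
        }
        return new TreeMap<>(stats);
    }

    public static void main(String[] args) {
        double[] s = run(List.of(List.of("Oslo;2.0", "Oslo;6.0"),
                                 List.of("Oslo;-1.0"))).get("Oslo");
        System.out.printf("%.1f/%.1f/%.1f%n", s[0], s[2] / s[3], s[1]);
    }
}
```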
🔗 Related repos
- quarkusio/quarkus → Quarkus is a Java framework used in some 1BRC submissions to achieve fast startup and low memory footprint; relevant for understanding how frameworks optimize for performance.
- openjdk/jdk21 → The official OpenJDK 21 repository; understanding the JVM internals (virtual threads, G1GC, escape analysis) is essential to optimize 1BRC implementations.
- TechEmpower/FrameworkBenchmarks → Similar competitive benchmarking framework for web frameworks; shares the same philosophy of standardized hardware evaluation and public leaderboards.
- jvm-profiling-tools/async-profiler → Essential profiling tool used by many 1BRC competitors to identify hotspots and measure improvements from optimizations.
- WireProtocol/mappedbus → Memory-mapped I/O library that some high-performing 1BRC submissions leveraged to efficiently read the massive CSV file.
🚪 PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Create a submission validation test suite to verify all calculate_average_*.sh scripts conform to the expected interface
With 100+ submission scripts in the repo (calculate_average_*.sh), there's no automated validation that they all accept the correct input format, produce valid output format, or handle edge cases correctly. This would catch broken submissions before they're merged and ensure consistency across all entries. A test harness could verify each script runs successfully, produces output matching the expected {station=min/mean/max, ...} format, and handles the test data correctly.
- [ ] Create src/test/java/dev/morling/SubmissionValidationTest.java to programmatically discover and validate all calculate_average_*.sh scripts
- [ ] Verify each script accepts input file path as first argument
- [ ] Verify output matches the expected format (station names with min/mean/max values, alphabetically sorted)
- [ ] Add Maven failsafe plugin configuration in pom.xml to run these validation tests in verify phase
- [ ] Document in ENVIRONMENT.md the validation requirements for new submissions
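A possible starting point for the output-shape check in this idea. The class and method names are illustrative, not from the repo; a full harness would also run each script and feed it test data:

```java
import java.util.regex.Pattern;

// Hypothetical helper for a submission validation suite: checks that a
// submission's stdout matches the "{Station=min/mean/max, ...}" shape
// with one-decimal values.
public class OutputShapeCheck {
    private static final String ENTRY =
            "[^=,]+=-?\\d+\\.\\d/-?\\d+\\.\\d/-?\\d+\\.\\d";
    private static final Pattern WHOLE =
            Pattern.compile("\\{" + ENTRY + "(, " + ENTRY + ")*\\}");

    static boolean looksValid(String out) {
        return WHOLE.matcher(out.strip()).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksValid(
                "{Abha=-23.0/18.0/59.2, Abidjan=-16.2/26.0/67.3}")); // prints true
        System.out.println(looksValid("{Abha=-23.0;18.0;59.2}"));    // prints false
    }
}
```

A real validation test would also assert alphabetical ordering of the station names, which the regex alone does not enforce.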
Add a benchmark comparison utility to measure relative performance between submissions
The repo has extensive submission scripts but no standardized way for contributors to compare their solution against others or the baseline. This utility would help contributors understand performance characteristics and identify optimization opportunities. A comparison tool would calculate runtime ratios, memory usage, and generate comparison reports.
- [ ] Create src/main/java/dev/morling/demos/BenchmarkComparator.java to run multiple submission scripts and collect metrics
- [ ] Add a Maven profile 'benchmark' in pom.xml to easily run comparisons
- [ ] Create a shell script utility/compare_submissions.sh that accepts submission names and generates a comparison report (runtime, relative speedup vs baseline)
- [ ] Document usage in ENVIRONMENT.md with example: './utility/compare_submissions.sh baseline mysubmission'
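The speedup math for such a comparator is simple. This sketch times in-process tasks, whereas the real tool would launch the calculate_average_*.sh scripts via ProcessBuilder; all names here are hypothetical:

```java
import java.time.Duration;

// Sketch of the comparison utility's core: time two runs and report the
// baseline-to-candidate speedup ratio.
public class SpeedupSketch {
    static Duration time(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return Duration.ofNanos(System.nanoTime() - start);
    }

    // speedup > 1.0 means the candidate is faster than the baseline
    static double speedup(Duration baseline, Duration candidate) {
        return (double) baseline.toNanos() / candidate.toNanos();
    }

    public static void main(String[] args) {
        Duration base = time(() -> { for (int i = 0; i < 5_000_000; i++); });
        Duration cand = time(() -> { for (int i = 0; i < 1_000_000; i++); });
        System.out.printf("baseline=%dms candidate=%dms speedup=%.2fx%n",
                base.toMillis(), cand.toMillis(), speedup(base, cand));
    }
}
```

For wall-clock comparisons of real scripts, multiple warm-up runs and a median would be needed, since single runs are noisy.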
Create a submission template generator and checklist to reduce PR review friction
Many submissions are variations on similar approaches but lack consistency in commit messages, script naming, and documentation. Providing a template would standardize submissions and make reviews faster. Currently .github/pull_request_template.md exists but doesn't guide submitters on what to include for a 1BRC submission specifically.
- [ ] Enhance .github/pull_request_template.md with 1BRC-specific sections: optimization techniques used, JVM flags tested, hardware target, expected runtime range
- [ ] Create utility/submission_template.sh that generates a boilerplate calculate_average_USERNAME.sh with proper structure, error handling, and usage comments
- [ ] Create SUBMISSION_GUIDELINES.md documenting: naming conventions, expected performance thresholds (per Jan 12 status: 10sec or less), required comments explaining approach, testing requirements
- [ ] Reference the new guidelines in the PR template
🌿 Good first issues
- Add benchmark comparison documentation: create a markdown file (BENCHMARK_ANALYSIS.md) that categorizes the 100+ submissions by optimization technique (virtual threads, SIMD, GC tuning, custom hash tables) and analyzes which techniques provided the largest speedups; helps newcomers understand what optimizations matter most.
- Write a tutorial for the baseline implementation: expand README.md with a section that walks through CalculateAverage.java line by line, explaining the parsing logic, the hash map aggregation, and sorting – new contributors often struggle to understand how the naive approach works before optimizing.
- Create a Docker environment setup script: add a docker-setup.sh that builds an image with JDK 21, the Maven wrapper, and the data generator pre-configured, eliminating the friction in ENVIRONMENT.md for Windows/macOS developers who cannot easily replicate the exact Linux benchmark environment.
⭐ Top contributors
- @gunnarmorling – 38 commits
- @Serkan ÖZAL – 4 commits
- @artsiomkorzun – 3 commits
- @thomaswue – 3 commits
- @ianopolousfast – 3 commits
🕗 Recent commits
- db06419 – Update README.md (gunnarmorling)
- 3372b6b – Adding Antonio Goncalves' blog post (agoncal)
- dfec2cd – Fixes progress bar for create_measurements.py (koutoftimer)
- 6125ba4 – Explicit float in commented out code (danielpatrickdotdev)
- 6daa93c – More accurate file size estimate (danielpatrickdotdev)
- c923467 – Adding blog post (gunnarmorling)
- fe9b527 – Adding a blog post (gunnarmorling)
- 00a309e – Adding some articles (gunnarmorling)
- 3be4a50 – Updating PR template (gunnarmorling)
- 0f28468 – Status update (gunnarmorling)
🔒 Security observations
The codebase appears to be a legitimate competitive programming challenge repository with generally low security risk. However, the pom.xml available to the analyzer was incomplete, preventing a thorough dependency vulnerability assessment. The project also pins an outdated JUnit version, which could be updated. The main security considerations are ensuring all dependencies are properly vetted, keeping Java and testing frameworks updated, and maintaining code review processes for the numerous shell script submissions. No hardcoded credentials, injection vulnerabilities, or infrastructure misconfigurations were detected based on the available information.
- High · Incomplete Maven pom.xml – missing dependency information → pom.xml. The pom.xml provided to the analyzer is truncated: the dependencies section cuts off mid-declaration, making it impossible to fully assess dependency vulnerabilities. Fix: ensure the pom.xml is complete and valid; use mvn dependency:tree to enumerate all dependencies and check them against known vulnerability databases (OWASP Dependency-Check, Snyk, etc.).
- Medium · Preview and incubator API reliance → pom.xml (maven.compiler.release property). The project targets Java 21, which is an LTS release with a long security-patch window; however, many submissions enable preview or incubator features (Vector API, sun.misc.Unsafe) whose behavior can change between JDK releases. Fix: pin the exact JDK build named in ENVIRONMENT.md when reproducing results.
- Low · Outdated JUnit version → pom.xml (junit-bom version 5.8.1). JUnit 5.8.1 is relatively old. While JUnit is primarily a testing framework and not a runtime dependency, outdated versions may miss bug fixes and improvements. Fix: update to the latest stable JUnit 5 version to pick up security patches and bug fixes.
- Low · No explicit security dependencies → pom.xml. The provided excerpt shows no security-related dependencies (encryption libraries, security frameworks, input validation libraries). For a performance challenge this is expected, but worth verifying against the complete pom.xml.
- Low · Large number of contributor shell scripts → calculate_average_*.sh files throughout the repository. 100+ contributor-supplied shell scripts could be vectors for code injection or malicious modification if not properly reviewed. Fix: enforce code review for all shell scripts, use signed commits and branch protection rules, and consider checksums or signatures to verify script integrity.
LLM-derived; treat as a starting point, not a security audit.
📍 Where to read next
- Open issues β current backlog
- Recent PRs β what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals – see the live page for receipts. Re-run on a new commit to refresh.