RepoPilot

hatoo/oha

Ohayou(おはよう), HTTP load generator, inspired by rakyll/hey with tui animation.

Healthy

Healthy across all four use cases

Use as dependency — Healthy (weakest axis)

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify — Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 2d ago
  • 8 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • Single-maintainer risk — top contributor 90% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — the badge live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/hatoo/oha)](https://repopilot.app/r/hatoo/oha)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/hatoo/oha on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: hatoo/oha

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/hatoo/oha shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 2d ago
  • 8 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Single-maintainer risk — top contributor 90% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live hatoo/oha repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/hatoo/oha.

What it runs against: a local clone of hatoo/oha — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in hatoo/oha | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 32 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>hatoo/oha</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of hatoo/oha. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/hatoo/oha.git
#   cd oha
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of hatoo/oha and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "hatoo/oha(\.git)?\b" \
  && ok "origin remote is hatoo/oha" \
  || miss "origin remote is not hatoo/oha (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in src/main.rs src/client.rs src/cli.rs src/monitor.rs src/result_data.rs; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 32 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~2d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/hatoo/oha"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

oha is a high-performance HTTP load generator written in Rust that sends configurable request loads to web servers and displays real-time statistics via a terminal UI. It replaces rakyll/hey with a modern async architecture (tokio), supports HTTP/1.1, HTTP/2, and experimental HTTP/3 (via h3-quinn), and provides live TUI animations powered by ratatui showing throughput, latency percentiles, and status distributions. Monolithic binary crate: src/main.rs is the CLI entry point, src/client.rs and src/client_h3.rs handle HTTP transports, src/monitor.rs orchestrates the real-time TUI loop via ratatui, and src/result_data.rs + src/histogram.rs accumulate and analyze statistics. A separate pgo/server/ subproject provides a benchmark target for Profile-Guided Optimization builds.

👥Who it's for

DevOps engineers, performance testers, and backend developers who need to benchmark HTTP services and need an alternative to Apache Bench or wrk that provides real-time visual feedback without leaving the terminal. Users running load tests in CI/CD pipelines or containers benefit from its minimal resource footprint and lack of GUI dependencies.

🌱Maturity & risk

Production-ready and actively maintained. The project is at v1.14.0 with established CI workflows (GitHub Actions for CI, release, and PGO builds), cross-platform release binaries, and packaging in Arch Linux, Homebrew, and Debian. The codebase shows consistent updates but with a reasonable, non-frenetic commit cadence suggesting stability over rapid breaking changes.

Low risk for core load-testing use cases. The dependency surface is moderate (tokio, hyper, ratatui are stable Rust ecosystem staples), but the HTTP/3 support via h3 is marked experimental and may introduce breakage. Single-maintainer risk exists (hatoo), though the tool is simple enough that forking is straightforward. No massive open issue backlog visible in the provided data.

Active areas of work

Active maintenance on performance optimization (PGO builds are a dedicated workflow), HTTP/3 experimental support refinement, and dependency updates (e.g., tokio 1.38.1, ratatui 0.30.0). The Changelog and release workflows indicate regular version bumps; the project is responsive to ecosystem changes rather than adding major features.

🚀Get running

Clone and build with Cargo: git clone https://github.com/hatoo/oha.git && cd oha && cargo build --release. For a quick test: cargo run -- https://example.com. Requires Rust 1.88+ (per rust-version in Cargo.toml), make, and cmake for OpenSSL/TLS support.

Daily commands: a local smoke test — cargo run -- -n 100 -c 10 https://httpbin.org/delay/0.1 — sends 100 requests across 10 concurrent workers. Containerized: docker build -t oha . && docker run --rm oha https://example.com. PGO-optimized builds require pgo.js and the profiling server in pgo/server/.

🗺️Map of the codebase

  • src/main.rs — Entry point that orchestrates CLI parsing, client initialization, and result aggregation—every load test flow starts here.
  • src/client.rs — Core HTTP/1.1 request engine using tokio and hyper; handles connection pooling, request execution, and timing measurement.
  • src/cli.rs — Command-line argument parsing and configuration structure; defines all user-facing options and defaults.
  • src/monitor.rs — Real-time TUI animation and live statistics display using ratatui; critical for the tool's unique interactive experience.
  • src/result_data.rs — Statistical aggregation and result storage; computes latency percentiles, histograms, and final report data.
  • Cargo.toml — Dependency manifest with feature flags (rustls vs native-tls); essential for understanding platform support and build options.
  • src/client_h3.rs — HTTP/3 (QUIC) client implementation; alternative protocol handler extending oha's protocol support.

🛠️How to make changes

Add a new CLI flag or option

  1. Add the field to the Args struct in src/cli.rs with appropriate clap attributes (src/cli.rs)
  2. Pass the new option through main.rs to the client/monitor initialization (src/main.rs)
  3. Use the option in client.rs or request_generator.rs where requests are built (src/client.rs)

Add a new HTTP protocol (e.g., HTTP/2)

  1. Create src/client_h2.rs following the pattern of src/client_h3.rs with a similar public interface (src/client_h2.rs)
  2. Add protocol detection logic in main.rs to instantiate the correct client type (src/main.rs)
  3. Declare the module in src/lib.rs and export public types (src/lib.rs)
  4. Add feature flag in Cargo.toml if dependencies are protocol-specific (Cargo.toml)

Add a new statistic or metric to results

  1. Define the new field in the result data structure in src/result_data.rs (src/result_data.rs)
  2. Compute the metric during aggregation (percentile, histogram, etc.) in src/result_data.rs or src/histogram.rs (src/histogram.rs)
  3. Add the metric to the final report output in src/printer.rs (src/printer.rs)
  4. Optionally display it in the live TUI in src/monitor.rs if it updates in real time (src/monitor.rs)
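
The aggregation in step 2 boils down to ranking sorted samples. A minimal, std-only sketch of that logic (illustrative only — oha's actual src/histogram.rs may compute percentiles differently, e.g. over fixed bucket boundaries):

```rust
/// Nearest-rank percentile over raw latency samples (seconds).
/// Illustrative sketch; not oha's actual implementation.
fn percentile(samples: &mut [f64], p: f64) -> Option<f64> {
    if samples.is_empty() {
        return None;
    }
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Nearest-rank: ceil(p/100 * n), converted to a 0-based index.
    let rank = ((p / 100.0) * samples.len() as f64).ceil() as usize;
    Some(samples[rank.saturating_sub(1)])
}

fn main() {
    let mut latencies = vec![
        0.012, 0.009, 0.031, 0.008, 0.120, 0.011, 0.010, 0.014, 0.013, 0.015,
    ];
    for p in [50.0, 90.0, 99.0] {
        if let Some(v) = percentile(&mut latencies, p) {
            println!("p{p}: {v:.3}s");
        }
    }
}
```

Whatever the real representation, step 3 then only has to format the computed values in src/printer.rs.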

Add a new authentication method (e.g., OAuth2)

  1. Create src/oauth2_auth.rs with token request and refresh logic (src/oauth2_auth.rs)
  2. Add authentication option to Args struct in src/cli.rs (src/cli.rs)
  3. Integrate auth header injection in src/request_generator.rs before building requests (src/request_generator.rs)
  4. Export the module in src/lib.rs (src/lib.rs)
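
The token handling in step 1 can start as a cached token plus expiry check. A hypothetical, std-only sketch (the src/oauth2_auth.rs module and this API are suggestions from the checklist, not existing oha code — oha has no built-in OAuth2 support today):

```rust
use std::time::{Duration, Instant};

/// Hypothetical cached bearer token, refreshed before expiry.
/// Sketch only — not part of oha.
struct BearerToken {
    value: String,
    expires_at: Instant,
}

impl BearerToken {
    fn new(value: impl Into<String>, ttl: Duration) -> Self {
        Self { value: value.into(), expires_at: Instant::now() + ttl }
    }

    /// True if the token expires within the given safety margin.
    fn needs_refresh(&self, margin: Duration) -> bool {
        Instant::now() + margin >= self.expires_at
    }

    /// Value for the `Authorization` header on each generated request.
    fn header_value(&self) -> String {
        format!("Bearer {}", self.value)
    }
}

fn main() {
    let token = BearerToken::new("abc123", Duration::from_secs(3600));
    println!("Authorization: {}", token.header_value());
    println!("refresh soon: {}", token.needs_refresh(Duration::from_secs(60)));
}
```

The actual token fetch/refresh would be an async HTTP call, and step 3 injects `header_value()` into each outgoing request.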

🔧Why these technologies

  • Tokio (async runtime) — Enables thousands of concurrent requests with minimal overhead; essential for high-throughput load testing.
  • Hyper (HTTP/1.1 client) — Mature, fast HTTP client with built-in connection pooling and streaming support.
  • Ratatui (TUI framework) — Provides a smooth, real-time animated terminal UI for live progress visualization—core differentiation vs. hey.
  • Rustls / native-tls — Dual TLS backend support allows users to choose between pure-Rust (rustls) or system native TLS; improves portability.
  • Quinn (HTTP/3 / QUIC) — Supports modern HTTP/3 protocol; offers lower latency and connection migration benefits.
  • Clap (CLI parsing) — Declarative command-line argument handling with curl compatibility layer; reduces boilerplate.

⚖️Trade-offs already made

  • Single-process tokio runtime vs. multi-process workers

    • Why: Simpler deployment, shared connection pools, easier result aggregation.
    • Consequence: Subject to single-machine limits (CPU cores, file descriptors); distributed load testing requires external orchestration.
  • Real-time TUI rendering every ~50ms

    • Why: Provides engaging interactive feedback during test runs.
    • Consequence: Minimal CPU overhead, but the final report is deferred until the run completes; the TUI cannot be disabled without a code change (no --quiet flag evident).
  • In-memory result buffering (via result_data.rs / db.rs)

    • Why: Fast aggregation and percentile computation; simple to implement.
    • Consequence: High memory usage for very long tests or enormous request volumes; no streaming or database persistence by default.
  • Dual TLS implementations (rustls + native-tls as features)

    • Why: Rustls is pure-Rust (portable, auditable); native-tls integrates OS trust stores.
    • Consequence: Build complexity; users must understand which to choose; potential inconsistencies in certificate validation.

🚫Non-goals (don't propose these)

  • Distributed load testing across multiple machines (single-process tool).
  • GUI or web-based dashboard (terminal-only TUI).
  • Persistent result storage or historical comparison (in-memory aggregation only).
  • Multi-protocol proxying or request routing (pure HTTP client, not a proxy).
  • Request scripting or complex dynamic payloads (basic template expansion only).
  • Authentication beyond basic/bearer/AWS SigV4 (no OAuth2, SAML, etc. built-in).
  • Windows native binary support (Rust + tokio support is present, but not actively tested in CI).

🪤Traps & gotchas

  • The HTTP/3 feature (http3) is experimental and works only with rustls; enabling it alongside native-tls fails at compile time.
  • The TLS config in src/tls_config.rs uses hardcoded root cert paths that may differ on Windows; test in your target environment.
  • Tests use self-signed certs in tests/common/ and assume localhost binding; they may hang or fail if ports 8080 and up are in use.
  • The pgo.js profiling script requires Node.js and a running pgo/server binary; the PGO workflow fails if the server crashes during profiling.
  • There are no environment variable overrides for CLI defaults — all configuration is flag-based.

🏗️Architecture

💡Concepts to learn

  • Token Bucket Rate Limiting — oha uses rate-limiting logic to spread requests across the -d (duration) window and respect -c (concurrency) limits; understanding token buckets explains why load patterns are smooth rather than bursty.
  • Percentile Latency Histograms — src/histogram.rs computes p50, p90, p99 latencies to identify tail behavior; this is crucial for understanding real-world user experience in production systems.
  • HTTP/2 Multiplexing — oha's HTTP/2 mode in src/client.rs reuses a single connection for many requests, reducing latency and resource usage compared to HTTP/1.1; this is why protocol choice matters in load tests.
  • QUIC and HTTP/3 — The experimental http3 feature uses QUIC (via quinn) for 0-RTT resumption and connection migration; understanding QUIC helps explain why HTTP/3 tests may behave differently than HTTP/2.
  • Profile-Guided Optimization (PGO) — oha's release-pgo.yml workflow uses PGO to optimize hot paths for load-testing workloads; the pgo/ subproject provides realistic profiling data to the Rust compiler.
  • AWS Signature Version 4 — src/aws_auth.rs implements SigV4 signing for load-testing AWS endpoints; this is a specialized feature for benchmarking AWS APIs with credential-based auth.
  • Async/await and Tokio Runtime — oha's entire concurrency model depends on tokio's work-stealing scheduler and async Rust primitives; understanding async Rust is mandatory for modifying the request loop.
  • rakyll/hey — Direct predecessor and inspiration; oha is a modernized Rust rewrite with async/await and TUI, versus Go's synchronous model.
  • wg/wrk — Alternative load testing tool in Lua; oha competes in the same niche but targets DevOps who prefer Rust tooling and real-time feedback.
  • hyperium/hyper — Underlying HTTP client library; oha depends on hyper 1.4 for HTTP/1.1 and HTTP/2 transport.
  • hyperium/h3 — HTTP/3 library for the experimental http3 feature; community-maintained Rust HTTP/3 stack.
  • ratatui-org/ratatui — TUI rendering framework used for the live dashboard; oha is a showpiece application demonstrating ratatui's real-time charting.
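
The first concept above, token-bucket rate limiting, can be sketched in a few lines of std-only Rust (illustrative; oha's real query-rate scheduler may be structured differently):

```rust
/// Minimal token bucket: `rate` tokens/sec, capped at `burst`.
/// Illustrative sketch, not oha's actual rate limiter.
struct TokenBucket {
    tokens: f64,
    burst: f64,
    rate: f64,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        Self { tokens: burst, burst, rate }
    }

    /// Refill for `elapsed_secs` of wall time, then try to take one token.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + self.rate * elapsed_secs).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // 2 requests/sec, burst of 2: the burst drains immediately, then
    // requests are admitted only as time passes — smooth, not bursty.
    let mut bucket = TokenBucket::new(2.0, 2.0);
    println!("{}", bucket.try_acquire(0.0)); // burst token 1
    println!("{}", bucket.try_acquire(0.0)); // burst token 2
    println!("{}", bucket.try_acquire(0.0)); // empty, no time passed
    println!("{}", bucket.try_acquire(0.5)); // 0.5s * 2/s refills 1 token
}
```

This is why a `-q`-style rate limit produces an even request spacing rather than front-loaded spikes.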

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive HTTP/3 integration tests in tests/tests.rs

The repo has HTTP/3 support via the 'http3' feature flag (h3, h3-quinn, quinn dependencies), but tests/tests.rs likely lacks HTTP/3-specific test cases. Given that HTTP/3 is an optional feature with multiple new dependencies (h3, h3-quinn, quinn-proto, quinn), adding tests that verify HTTP/3 requests work correctly alongside existing HTTP/1 and HTTP/2 tests would catch regressions and validate the feature quality.

  • [ ] Review tests/tests.rs to identify existing HTTP/1 and HTTP/2 test patterns
  • [ ] Add test cases that run with --http3 flag (or equivalent from src/cli.rs)
  • [ ] Set up HTTP/3 test server in tests/common/mod.rs alongside existing server.cert and server.key
  • [ ] Create conditional compilation guards using #[cfg(feature = "http3")] for HTTP/3-specific tests
  • [ ] Verify tests pass with cargo test --features http3
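
The conditional-compilation guard in the fourth checklist item follows this pattern (a sketch of the gating only — the test bodies and HTTP/3 server setup are the actual work):

```rust
// Sketch of feature-gating HTTP/3 tests. With `cargo test --features http3`
// the module compiles; without it, it is excluded from the build entirely.
#[cfg(feature = "http3")]
mod http3_tests {
    // #[test]
    // fn basic_http3_request() { /* start an h3 server, run oha with --http3 */ }
}

fn main() {
    // `cfg!` evaluates at compile time to whether the feature is enabled.
    println!("http3 feature enabled: {}", cfg!(feature = "http3"));
}
```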

Add unit tests for src/url_generator.rs and src/request_generator.rs

These modules handle critical request generation logic but have no corresponding test coverage visible in tests/. The url_generator likely handles templating/variable substitution, and request_generator handles request creation. Testing these in isolation ensures correctness of load generation patterns, especially for dynamic URL/header generation which could have subtle bugs.

  • [ ] Create tests/unit/url_generator.rs with tests for URL pattern generation and variable substitution
  • [ ] Create tests/unit/request_generator.rs with tests for request building with various header/body combinations
  • [ ] Test edge cases: empty URLs, special characters, header encoding, body serialization
  • [ ] Add module declarations in tests/common/mod.rs or create tests/unit/mod.rs structure
  • [ ] Run cargo test url_generator request_generator to validate coverage

Add CI workflow for testing native-tls feature independently

The repo has two TLS backends (rustls as default, native-tls as optional), but .github/workflows/CI.yml likely only tests the default 'rustls' feature. Without testing native-tls separately, platform-specific native-tls bugs (especially on macOS/Windows) could slip through. This mirrors the existing release-pgo.yml pattern of specialized workflows.

  • [ ] Review .github/workflows/CI.yml to confirm native-tls feature isn't tested
  • [ ] Create .github/workflows/ci-native-tls.yml that runs: cargo test --no-default-features --features native-tls
  • [ ] Include matrix for multiple Rust versions and OS targets (ubuntu-latest, macos-latest, windows-latest)
  • [ ] Ensure test matrix includes both 'native-tls' only and 'native-tls + vsock' feature combinations
  • [ ] Add status badge to README.md linking to the new workflow
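
A sketch of the workflow from steps 2–4 (file name, matrix, and trigger are suggestions — align them with the repo's existing CI.yml conventions before committing; the vsock combination is excluded off-Linux since vsock is a Linux-only API):

```yaml
# .github/workflows/ci-native-tls.yml — hypothetical, not an existing file.
name: CI (native-tls)
on: [push, pull_request]
jobs:
  test-native-tls:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        features: ["native-tls", "native-tls,vsock"]
        exclude:
          - os: macos-latest
            features: "native-tls,vsock"
          - os: windows-latest
            features: "native-tls,vsock"
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      - run: cargo test --no-default-features --features "${{ matrix.features }}"
```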

🌿Good first issues

  • Add latency percentile output to JSON schema (src/result_data.rs and Cargo.toml schema.json). Currently, histogram percentiles are computed but not serialized to the output JSON, making programmatic parsing incomplete.
  • Expand tests/tests.rs with integration tests for curl compatibility flags (src/curl_compat.rs). Only a few curl equivalents are tested; adding tests for -H (headers), -d (body), -X (method) would catch regressions.
  • Document the Monitor::new() event-driven state machine in src/monitor.rs with diagrams or comments. The current code is dense; a junior would struggle to understand request/response batching during live UI updates.

Top contributors


📝Recent commits

  • d07ff6b — Merge pull request #881 from hatoo/update-deps (hatoo)
  • ef17630 — drop termux resolv.conf handling (hatoo)
  • 8cbe96b — fix termux branch cfg for hickory 0.26 (hatoo)
  • 4685aea — fix? (hatoo)
  • b025833 — clippy fix (hatoo)
  • f22e23e — clippy --fix (hatoo)
  • b1ed927 — bump average (hatoo)
  • e4a6bbb — bump (hatoo)
  • 8289156 — bump ctor (hatoo)
  • ebfde86 — bump jsonschema (hatoo)

🔒Security observations

The oha HTTP load generator codebase shows generally good security practices: rustls for TLS, non-root Docker execution, and disciplined dependency management. Notable issues remain: base Docker images lack version pinning, creating supply-chain risk, and test certificates are committed to the repository. The incomplete dependency specification requires validation, and building tokio with the full feature set enlarges the attack surface unnecessarily. (The originally flagged "non-existent Rust edition 2024" is a false positive — edition 2024 has been valid since Rust 1.85.) Overall security posture is moderate-to-good; address Docker base-image pinning before production use.

  • Info (false positive) · Edition 2024 — Cargo.toml. The manifest specifies edition = '2024', originally flagged as non-existent. Edition 2024 was stabilized in Rust 1.85 (February 2025), and the declared rust-version of 1.88 already guarantees a compatible toolchain. No change needed.
  • Medium · Incomplete Dependency Specification — Cargo.toml. The Cargo.toml file appears truncated with incomplete dependency entries (quinn-proto dependency line ends with 'option' and is cut off). This could indicate corrupted or incomplete package manifest. Fix: Complete and validate all dependency specifications in Cargo.toml. Run 'cargo check' to verify all dependencies are properly specified.
  • Medium · Test Certificates in Repository — tests/common/server.cert, tests/common/server.key. Self-signed test certificates (server.cert, server.key) are committed to the repository. While these are for testing purposes, sensitive key material should never be committed even in test contexts. Fix: Generate test certificates at test runtime or during CI/CD pipeline instead of storing them in version control. Consider using ephemeral certificate generation tools.
  • Medium · Docker Base Image Not Pinned — Dockerfile. The Dockerfile uses 'docker.io/library/rust:${RUST_VERSION} AS chef' and 'registry.fedoraproject.org/fedora-minimal AS runtime' without specific version tags. This creates non-reproducible builds and potential security risks from unpredictable base image updates. Fix: Pin all base images to specific digest hashes or version tags: 'rust:1.88-slim' and 'fedora-minimal:40' (or latest stable version with digest hash).
  • Medium · Running Container as Non-Root Without Explicit UID Mapping — Dockerfile. While the Dockerfile sets USER 65535 (usually nobody/nfsnobody), the explicit UID may not be portable across all systems and could cause permission issues. Fix: Either document the UID requirement or use 'USER nobody' for better compatibility. Ensure all mounted volumes have appropriate permissions.
  • Low · No HEALTHCHECK in Dockerfile — Dockerfile. The Docker image lacks a HEALTHCHECK instruction, making it difficult for orchestrators to determine if the container is healthy. Fix: Add a HEALTHCHECK instruction to enable health monitoring in Kubernetes or Docker Compose deployments.
  • Low · Dependency on Full Tokio Feature Set — Cargo.toml (dependencies section). The dependencies include 'tokio = { version = "1.38.1", features = ["full"] }', which includes all features. This increases attack surface and bundle size unnecessarily. Fix: Replace features = ["full"] with only required tokio features (e.g., features = ["rt", "macros", "net", "io-util"]).
  • Low · Cargo.lock tracking — .gitignore. For a binary project like oha, Cargo.lock should be committed to enable reproducible builds. Fix: Verify Cargo.lock is tracked in version control and not listed in .gitignore.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
