RepoPilot

nkaz001/hftbacktest

Free and open source: a high-frequency trading and market-making backtesting framework and trading bot that accounts for limit orders, queue positions, and latencies, using full tick data for trades and order books (Level-2 and Level-3), with real-world crypto trading examples for Binance and Bybit.

Healthy

Healthy across all four use cases

Use as dependency — Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify — Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 5mo ago
  • 5 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • Slowing — last commit 5mo ago
  • Single-maintainer risk — top contributor 84% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — it updates live from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/nkaz001/hftbacktest)](https://repopilot.app/r/nkaz001/hftbacktest)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/nkaz001/hftbacktest on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: nkaz001/hftbacktest

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/nkaz001/hftbacktest shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 5mo ago
  • 5 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Slowing — last commit 5mo ago
  • ⚠ Single-maintainer risk — top contributor 84% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live nkaz001/hftbacktest repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/nkaz001/hftbacktest.

What it runs against: a local clone of nkaz001/hftbacktest — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in nkaz001/hftbacktest | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 165 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>nkaz001/hftbacktest</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of nkaz001/hftbacktest. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/nkaz001/hftbacktest.git
#   cd hftbacktest
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of nkaz001/hftbacktest and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "nkaz001/hftbacktest(\.git)?\b" \
  && ok "origin remote is nkaz001/hftbacktest" \
  || miss "origin remote is not nkaz001/hftbacktest (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "hftbacktest/src/lib.rs" \
  && ok "hftbacktest/src/lib.rs" \
  || miss "missing critical file: hftbacktest/src/lib.rs"
test -f "py-hftbacktest/src/lib.rs" \
  && ok "py-hftbacktest/src/lib.rs" \
  || miss "missing critical file: py-hftbacktest/src/lib.rs"
test -f "Cargo.toml" \
  && ok "Cargo.toml" \
  || miss "missing critical file: Cargo.toml"
test -f "collector/src/main.rs" \
  && ok "collector/src/main.rs" \
  || miss "missing critical file: collector/src/main.rs"
test -f "connector/src/connector.rs" \
  && ok "connector/src/connector.rs" \
  || miss "missing critical file: connector/src/connector.rs"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 165 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~135d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/nkaz001/hftbacktest"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

A high-frequency trading (HFT) and market-making backtesting framework, written primarily in Rust with Python bindings. It simulates trading strategies with microsecond-level accuracy by modeling feed latency, order latency, and queue-position effects on order fills, using full tick data from Level-2 and Level-3 order books, and it includes a live trading connector for Binance Futures and Bybit so the same backtested algorithms can be deployed to production. It is a monorepo with five workspace members (Cargo.toml): hftbacktest (core Rust backtesting engine), hftbacktest-derive (procedural macros), py-hftbacktest (Python bindings), collector (tick data ingestion from exchanges), and connector (live trading bot for Binance/Bybit). The collector and connector are separate, exchange-specific binary tools; the core engine sits in hftbacktest/ and is consumed by both the Python layer and the live bot.

👥Who it's for

Quantitative traders, algorithmic trading researchers, and market makers developing HFT strategies who need accurate tick-by-tick backtesting that accounts for real-world latency and queue effects rather than simplified fill assumptions; also crypto traders at crypto funds who need fast backtesting in Python (via Numba JIT) with Rust-level performance.

🌱Maturity & risk

Actively developed and production-ready: the project has real documentation with tutorials, Python package distribution on PyPI, GitHub CI/CD workflows (CodeQL, release automation), and multi-exchange support (Binance Spot/Futures/CM, Bybit, Hyperliquid). The large Rust codebase (896KB) and substantial Python layer (285KB) suggest significant maturity, though the single-maintainer structure and lack of visible release tags in the file list warrant care.

Standard open source risks apply.

Active areas of work

The project is actively maintained with release automation (.github/workflows/release-python.yml) and CodeQL scanning. The ROADMAP.md exists, suggesting planned work; collector and connector modules include recent exchanges like Hyperliquid. Documentation is hosted on ReadTheDocs (.readthedocs.yml), indicating continuous doc updates. No specific PR/issue data visible, but the presence of multi-exchange http.rs clients (binance, bybit, hyperliquid) suggests recent feature expansion.

🚀Get running

git clone https://github.com/nkaz001/hftbacktest.git
cd hftbacktest
cargo build --release
# For Python bindings:
cd py-hftbacktest
pip install -e .
# Or install from PyPI: pip install hftbacktest

Daily commands:

  • Backtesting: cargo run --release --bin hftbacktest (core engine)
  • Python strategy development: write a script against the hftbacktest API and run it with python strategy.py
  • Live trading: run the Rust connector binary with exchange credentials in a TOML config (see connector/examples/*.toml)
  • Data collection: use collector/src/main.rs to fetch tick data from exchanges

🗺️Map of the codebase

  • hftbacktest/src/lib.rs — Core library entry point; exposes the backtesting engine, order book reconstruction, and latency/fill models that all strategies depend on.
  • py-hftbacktest/src/lib.rs — Rust–Python FFI binding layer; enables Numba JIT integration and all Python API calls into the Rust backtesting engine.
  • Cargo.toml — Workspace configuration defining all member crates (hftbacktest, collector, connector) and shared build profiles for release optimization.
  • collector/src/main.rs — Data collection entrypoint for fetching full tick data (trades, Level-2/3 order books) from Binance, Bybit, and Hyperliquid.
  • connector/src/connector.rs — Live trading connector abstraction; bridges backtesting strategies to real exchanges via REST and WebSocket streams.
  • hftbacktest-derive/src/lib.rs — Procedural macros for deserializing market data structures; critical for tick-by-tick simulation performance.
  • docs/conf.py — Documentation build configuration; includes Python API surface area and real-world examples for strategy development.

🛠️How to make changes

Add a new exchange connector (e.g., Kraken, Coinbase)

  1. Create new module under connector/src/exchange_name/ (connector/src/exchange_name/mod.rs)
  2. Implement REST API client for order submission and account queries (connector/src/exchange_name/rest.rs)
  3. Implement public WebSocket stream parser for trades and L2/L3 order book updates (connector/src/exchange_name/public_stream.rs)
  4. Implement order manager to track fills and reconcile with exchange state (connector/src/exchange_name/ordermanager.rs)
  5. Integrate new connector into connector/src/connector.rs as a new exchange variant (connector/src/connector.rs)

Add a new latency model for order fills

  1. Create new model struct implementing the latency trait in hftbacktest/src/ (hftbacktest/src/latency_models.rs)
  2. Implement fill probability calculation based on order queue position and market conditions (hftbacktest/src/order.rs)
  3. Add Python wrapper in py-hftbacktest/src/lib.rs to expose model in backtesting config (py-hftbacktest/src/lib.rs)
  4. Document model parameters and calibration guidance in docs/ (docs/order_fill.rst)
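The trait in step 1 lives in the Rust core; as a language-neutral illustration of what an order-latency model does, here is a hedged Python sketch of a constant-plus-jitter model. The class and method names are invented for illustration, not hftbacktest's API:

```python
import random

class ConstantJitterLatency:
    """Hypothetical order-latency model: fixed base latency plus uniform
    jitter, in nanoseconds. Illustrative only -- the real hftbacktest
    models are Rust trait implementations, not this Python class."""

    def __init__(self, base_ns: int, jitter_ns: int, seed: int = 42):
        self.base_ns = base_ns
        self.jitter_ns = jitter_ns
        self.rng = random.Random(seed)

    def entry(self, order_ts: int) -> int:
        # Timestamp at which the exchange first sees the order.
        return order_ts + self.base_ns + self.rng.randrange(self.jitter_ns)

    def response(self, exch_ts: int) -> int:
        # Timestamp at which the strategy sees the exchange's response.
        return exch_ts + self.base_ns + self.rng.randrange(self.jitter_ns)
```

Calibrating `base_ns` and `jitter_ns` against measured round-trip times (step 4's documentation concern) is what keeps the backtest honest.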

Add a new data source (e.g., Tardis, Databento alternative)

  1. Create new collector module under collector/src/data_source_name/ (collector/src/data_source_name/mod.rs)
  2. Implement HTTP client for fetching tick data batches (collector/src/data_source_name/http.rs)
  3. Parse response and normalize to hftbacktest's internal tick format (collector/src/data_source_name/mod.rs)
  4. Add data utility module in py-hftbacktest for file I/O and validation (py-hftbacktest/python/hftbacktest/data/utils/data_source_name.py)

Create a custom backtesting strategy in Python

  1. Define a Numba-compatible function decorated with @strategy that takes (hbt, state, ...) parameters (examples/CustomStrategy.ipynb)
  2. Use hft.Backtest() to initialize with tick data and market configuration (py-hftbacktest/python/hftbacktest/__init__.py)
  3. Call strategy function and run() to execute tick-by-tick simulation (py-hftbacktest/python/hftbacktest/backtest.py)
  4. Query stats (PnL, Sharpe, max drawdown) via backtest.stats object (py-hftbacktest/python/hftbacktest/stats.py)
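The decorator and function names in the checklist may not match the released API exactly. As a dependency-free illustration of the tick-by-tick loop the engine conceptually runs, here is a toy market-making simulation — everything in it is hypothetical scaffolding, not hftbacktest's API:

```python
from dataclasses import dataclass

@dataclass
class Tick:
    ts: int       # event timestamp (ns)
    price: float  # trade price
    qty: float    # signed size: positive = buy aggressor, negative = sell aggressor

def run_tick_by_tick(ticks, half_spread=0.5):
    """Toy market-making loop: quote around the last trade price and
    assume a fill whenever a trade crosses our quote. Illustrative only --
    hftbacktest additionally models feed/order latency and queue
    position before granting a fill."""
    position, cash = 0.0, 0.0
    bid = ask = last_price = None
    for t in ticks:
        # Check fills against the quotes placed on the previous tick.
        if bid is not None:
            if t.qty < 0 and t.price <= bid:    # seller hit our bid
                position += 1.0
                cash -= bid
            elif t.qty > 0 and t.price >= ask:  # buyer lifted our ask
                position -= 1.0
                cash += ask
        # Re-quote symmetrically around the latest trade price.
        bid, ask = t.price - half_spread, t.price + half_spread
        last_price = t.price
    # Mark remaining inventory to the last trade price.
    return (cash + position * last_price) if last_price is not None else 0.0
```

The real engine runs this kind of loop inside Numba-compiled code, which is why strategy functions must stay within Numba's supported Python subset.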

🔧Why these technologies

  • Rust + PyO3 — Tick-by-tick backtesting must process millions of events per second; Rust's zero-cost abstractions and PyO3's seamless FFI allow Python strategy code to call compiled Rust engine without GIL contention.
  • Numba JIT (Python) — Strategy inner loops benefit from LLVM JIT compilation; Numba-compatible interface allows strategies to run 100–1000x faster than pure Python while remaining readable.
  • WebSocket + REST (Binance, Bybit, Hyperliquid) — Live trading connector must handle both streaming quotes (WebSocket for latency) and order submission (REST for reliability); each exchange has custom message formats requiring adapter pattern.
  • Full order book reconstruction (L2/L3) — Realistic queue position and fill probability models require accurate market depth; L3 (order-by-order) gives exact queue position, L2 allows interpolation-based estimation.

⚖️Trade-offs already made

  • Tick-by-tick replay vs. event-driven simulation

    • Why: Tick-by-tick replay is deterministic and reproducible, making latency injection and fill modeling consistent; event-driven would be faster but harder to debug and validate against live execution.
    • Consequence: Backtest runtime scales with tick volume (millions of ticks = hours/days for multi-year studies); mitigated by optional time-interval skipping and Numba acceleration.
  • Separate collector, connector, and backtester binaries

    • Why: Decoupled concerns allow independent optimization: collector prioritizes API availability; connector handles live order routing; backtester is CPU-bound and can run offline.
    • Consequence: Operational complexity (three binaries to manage); benefit is modularity—can upgrade collector without touching live trading connector.
  • Custom latency/fill models vs. pre-built ones

    • Why: Generic models (probabilistic queue, simple latency distributions) work for most strategies; custom models allow edge-case handling (exchange-specific behavior, hardware latency profiles).
    • Consequence: API surface is larger; contributors must understand model trait boundaries and Rust generics.
  • Multi-asset / multi-exchange in single backtest

    • Why: Realistic market-making and arbitrage strategies operate across venues, so the backtester supports simultaneous simulation of multiple assets and exchanges in one run.
    • Consequence: Feeds must be synchronized across venues, and order routing per asset adds bookkeeping to both the engine and strategy code.

🪤Traps & gotchas

  • Exchange credentials: live trading via connector requires API keys/secrets in TOML config files (see connector/examples/*.toml) — never commit these.
  • Tick data format: collector outputs binary/CSV tick data; the backtest engine expects a specific schema, and a mismatch causes silent failures.
  • Numba compilation: the first backtest run in Python triggers JIT compilation, causing a long startup; subsequent runs are fast — don't mistake the first run for a performance problem.
  • Order latency models: the defaults are crypto-specific (exchange-dependent latencies); using them on other assets without tuning invalidates results.
  • Python version: py-hftbacktest likely requires Python 3.8+ due to PyO3; check the release notes.
  • Memory: full Level-3 order book reconstruction for multi-asset backtests can be memory-intensive; there is no explicit documentation on heap limits.
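The tick-schema trap is the kind of failure that cheap up-front validation turns from silent into loud. A hedged sketch — the tuple layout here is an assumption for illustration, not hftbacktest's actual binary schema:

```python
def validate_ticks(ticks):
    """Fail fast on mismatches that otherwise surface as silent backtest
    errors. Each tick is assumed to be an (exch_ts, local_ts, price, qty)
    tuple -- an illustrative layout, not hftbacktest's real schema."""
    prev_local = None
    for i, (exch_ts, local_ts, price, qty) in enumerate(ticks):
        if local_ts < exch_ts:
            raise ValueError(f"row {i}: negative feed latency (local_ts < exch_ts)")
        if prev_local is not None and local_ts < prev_local:
            raise ValueError(f"row {i}: local_ts not monotonically non-decreasing")
        if price <= 0 or qty == 0:
            raise ValueError(f"row {i}: degenerate price/qty")
        prev_local = local_ts
```

Running a check like this once after collection is far cheaper than debugging a backtest that quietly consumed malformed data.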

🏗️Architecture

💡Concepts to learn

  • Market-By-Price (Level-2) vs Market-By-Order (Level-3) Order Books — hftbacktest uniquely reconstructs full order books from both feed types; understanding the difference is essential for accurate fill simulation and queue position modeling
  • Order Queue Position and Fill Probability — Core differentiator of hftbacktest: simulating where your order sits in the queue and odds of fill is critical for realistic HFT backtests; naive models assume instant fills
  • Feed Latency and Order Latency Modeling — hftbacktest allows custom models for both feed latency (data delay) and order latency (order submission to exchange); misconfiguring these makes backtests unrealistic
  • Tick-by-Tick Simulation vs Interval-Based Simulation — hftbacktest supports both modes; tick-by-tick is more accurate but slower; understanding the tradeoff is needed to choose the right backtest mode
  • PyO3 and Rust FFI — py-hftbacktest bridges Python strategy code to Rust simulation engine via PyO3; understanding this FFI is essential when extending or debugging the Python bindings
  • Numba JIT Compilation — Strategies run inside Numba JIT functions for speed; this imposes restrictions on Python code (no objects, limited NumPy) and requires awareness of JIT limitations
  • Multi-Exchange and Multi-Asset Backtesting — hftbacktest supports simultaneous simulation across Binance and Bybit with different tickers; requires careful handling of order routing and synchronization across feeds
🔗Related projects

  • kernc/backtesting.py — Pure-Python backtesting framework; a good alternative if you want simpler setup, but without hftbacktest's latency and queue modeling
  • tensortrade-org/tensortrade — Python-based trading backtest engine with reinforcement learning; overlaps on strategy backtest but less focus on microsecond latency accuracy
  • ccxt/ccxt — Unified crypto exchange API wrapper; commonly used alongside hftbacktest to fetch market data and manage live orders across exchanges
  • project-serum/serum-dex — Solana DEX implementation; relevant reference for understanding on-chain order book mechanics and fill simulation concepts
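Queue position and fill probability, listed among the concepts above, is hftbacktest's core differentiator, and it can be made concrete with a small dependency-free sketch. This shows one common L2-based heuristic (pro-rata attribution of cancels); hftbacktest implements several such probability models in Rust, and the names below are invented for illustration:

```python
class L2QueueEstimate:
    """Estimate how much resting volume is still ahead of our order at one
    price level, given only L2 (aggregated) depth updates. L2 hides
    individual orders, so cancels are attributed pro-rata between volume
    ahead of and behind us -- a common heuristic, not hftbacktest's
    exact model."""

    def __init__(self, depth_at_join: float):
        self.ahead = depth_at_join  # volume resting before we joined
        self.depth = depth_at_join  # total visible depth at the level

    def on_depth(self, new_depth: float) -> None:
        shrink = self.depth - new_depth
        if shrink > 0 and self.depth > 0:
            # Cancels: assume they came from ahead of us in proportion
            # to how much of the level was ahead of us.
            self.ahead = max(0.0, self.ahead - shrink * (self.ahead / self.depth))
        self.depth = new_depth

    def on_trade(self, traded_qty: float) -> None:
        # Executions always consume the front of the queue first.
        self.ahead = max(0.0, self.ahead - traded_qty)
        self.depth = max(0.0, self.depth - traded_qty)

    @property
    def filled(self) -> bool:
        return self.ahead <= 0.0
```

With L3 (order-by-order) data none of this estimation is needed — the exact queue position is observable, which is why the docs treat L3 feeds as the gold standard.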

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for connector exchanges (Binance Futures, Binance Spot, Bybit)

The connector/ module has REST and WebSocket implementations for multiple exchanges (binancefutures, binancespot, bybit) but there are no visible integration or unit tests. Adding tests would validate that market data streams, order manager operations, and message parsing work correctly across all three exchanges. This is critical for a trading bot framework where bugs could result in financial losses.

  • [ ] Create connector/tests/ directory with exchange-specific test modules
  • [ ] Add unit tests for connector/src/binancefutures/msg/stream.rs parsing logic
  • [ ] Add unit tests for connector/src/bybit/msg.rs message deserialization
  • [ ] Add tests for connector/src/*/ordermanager.rs order lifecycle (create, update, cancel)
  • [ ] Add mock server tests for REST endpoints in connector/src/*/rest.rs
  • [ ] Update connector/Cargo.toml to include dev-dependencies for mocking libraries

Add error handling and recovery tests for the collector module

The collector/ module fetches data from multiple exchanges (Binance, Binance Futures, Bybit, Hyperliquid), and collector/src/error.rs exists, but there is little visible coverage of error scenarios. A new contributor should add tests covering network failures, API rate limiting, malformed responses, and file I/O errors to ensure robust data collection. This is essential since corrupted or incomplete historical data breaks backtesting accuracy.

  • [ ] Add tests in collector/tests/ for error.rs error type coverage
  • [ ] Add unit tests for collector/src/binance/http.rs HTTP error handling and retries
  • [ ] Add unit tests for collector/src/file.rs covering write failures and disk space issues
  • [ ] Add integration tests for collector/src/throttler.rs rate limiting behavior
  • [ ] Add tests covering malformed JSON responses from each exchange (binance, bybit, hyperliquid)
  • [ ] Document error scenarios in collector/README.md or update connector/README.md with error handling guide

Add Python-Rust FFI tests and benchmarks for py-hftbacktest

The workspace includes py-hftbacktest for Python bindings but the file structure shows no visible test directory for the Python module. Given that hftbacktest is designed to work in Numba JIT functions, it's critical to add tests validating the Python API and performance characteristics. This ensures the Python interface correctly exposes the Rust backtesting engine and maintains performance across Python versions.

  • [ ] Create py-hftbacktest/tests/ directory with test_*.py files
  • [ ] Add tests for Python API bindings matching hftbacktest Rust public APIs
  • [ ] Add performance benchmarks comparing Numba JIT execution with native Rust execution
  • [ ] Add tests validating full order book reconstruction via Python interface
  • [ ] Add tests for feed and order latency modeling via Python API
  • [ ] Add GitHub Actions workflow .github/workflows/test-python.yml for py-hftbacktest (note: only codeql.yml, release-python.yml, stale.yml currently exist)

🌿Good first issues

  • Add unit tests for collector/src/hyperliquid/http.rs similar to the pattern in other exchange modules—currently hyperliquid integration is present but test coverage is unknown.
  • Create a tutorial or example notebook demonstrating how to add a custom latency model to the core Rust engine (extend the latency trait); would reduce friction for users wanting custom market conditions.
  • Document the exact tick data schema (fields, binary format, CSV columns) expected by hftbacktest/src/ in a schema reference file; data ingestion errors are currently hard to debug.

Top contributors


📝Recent commits

  • 5f3ec40 — fix: update the order status to Status::New when the order is modified. (It may later be changed to Status::Replaced (nkaz001)
  • 3a3d1be — fix: fix an issue where side is ignored in submit_order. (#298) (nkaz001)
  • 75e25b5 — feat: update bybit feed stream collection. (nkaz001)
  • 20b6f43 — Merge pull request #291 from choosemycoins/fix/fix-bybit-live--orderbook-depth (nkaz001)
  • a244a14 — ci: update macos runner. (nkaz001)
  • 5a2f636 — ci: update macos runner. (nkaz001)
  • 7c2b953 — fix: fixed orderbook depth level in bybit connector (choosemycoins)
  • a4017f2 — chore: upgrade dependencies. (nkaz001)
  • ddfd5e3 — chore: update Rust version to 1.91.1. (nkaz001)
  • 4d700e8 — chore: bump project version. (nkaz001)

🔒Security observations

  • Medium · Insufficient Input Validation in HTTP Clients — collector/src/binance/http.rs, collector/src/bybit/http.rs, collector/src/hyperliquid/http.rs. Multiple HTTP client implementations (Binance, Bybit, Hyperliquid) in the collector module make external API calls. Without visible input validation and sanitization, there's a risk of injection attacks if user-controlled data is passed to API endpoints without proper validation. Fix: Implement strict input validation for all user-supplied parameters before constructing API requests. Use allowlists for acceptable values and validate data types, ranges, and formats. Sanitize any data that will be used in URLs or request bodies.
  • Medium · Potential Hardcoded Credentials in Configuration Files — connector/examples/binancefutures.toml, connector/examples/binancespot.toml, connector/examples/bybit.toml. Configuration files in connector/examples/ (binancefutures.toml, binancespot.toml, bybit.toml) may contain examples with API keys or secrets. If developers copy these files for actual use without removing example credentials, sensitive data could be exposed. Fix: Ensure example configuration files use placeholder values (e.g., 'YOUR_API_KEY_HERE') and document the requirement to replace them. Implement .gitignore rules to prevent accidental commits of real configuration files. Use environment variables or secure credential management systems instead of storing secrets in files.
  • Medium · Missing Rate Limiting Implementation — collector/src/throttler.rs, collector/src/main.rs. While a throttler.rs module exists, the effectiveness of rate limiting for external API calls is not clearly visible in the file structure. Inadequate rate limiting could lead to API abuse, DDoS-like behavior, or account suspension from exchanges. Fix: Verify that the throttler module properly implements rate limiting per exchange requirements. Implement exponential backoff strategies for failed requests. Add monitoring and logging of rate limit headers from API responses.
  • Low · Unsafe Panic Behavior in Release Mode — Cargo.toml (release profile). The release profile is configured with panic = "abort" which can terminate the entire process without proper cleanup or error recovery, potentially leaving trading systems in an inconsistent state or losing critical data. Fix: Consider using panic = "unwind" for release builds to allow proper error handling and cleanup routines to execute. Alternatively, ensure comprehensive error handling throughout the codebase with explicit logging before any panic occurs.
  • Low · Overflow Checks Disabled in Release Mode — Cargo.toml (release profile). The release profile has overflow-checks = false, which means integer overflow/underflow conditions won't be detected. In financial trading applications, this could lead to incorrect calculations or loss of funds. Fix: Enable overflow checks in release builds (overflow-checks = true) as financial applications require strict arithmetic validation. Profile the performance impact and optimize algorithms instead of disabling safety checks.
  • Low · Debug Assertions Disabled in Production — Cargo.toml (release profile). The release profile disables debug assertions (debug-assertions = false), which could hide logical errors in production that were caught during testing. Fix: Consider enabling debug assertions or implementing custom assertion-like checks for critical financial calculations. Use feature flags to enable additional validation in critical paths even in release mode.
  • Low · Missing Security Documentation — Repository root. No visible security.md, SECURITY.md, or security policy documentation in the repository. Users may not know how to report security vulnerabilities responsibly. Fix: Create a SECURITY.md file documenting the security reporting process, supported versions, and any known security considerations. Add security guidelines to the README or documentation.
  • Low · Potential Unvalidated WebSocket Connections — connector/src/binancefutures/market_data_stream.rs, connector/src/binancefutures/user_data_stream.rs, connector/src/binancespot/*, connector/src/bybit/*. Market data stream and user data stream modules (market_data_stream.rs, user_data_stream.rs) handle WebSocket connections for real-time exchange data. Without proper certificate validation and connection security measures, MITM attacks could be possible. Fix: Ensure TLS certificate validation is enabled for all wss:// connections (never disable verification in the WebSocket/TLS client configuration) and fail closed on certificate errors.
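To make the rate-limiting recommendation concrete, here is a minimal token-bucket limiter with an exponential-backoff schedule — a sketch of the pattern the observation describes, not the implementation in collector/src/throttler.rs:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` requests per second with
    bursts up to `capacity`. The `now` parameter is injectable so the
    limiter can be tested with a fake clock."""

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.now = now
        self.last = now()

    def try_acquire(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        t = self.now()
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def backoff_delays(base=0.5, factor=2.0, retries=5, cap=30.0):
    """Exponential backoff schedule (seconds) for failed requests."""
    return [min(cap, base * factor ** i) for i in range(retries)]
```

In a collector loop, a denied `try_acquire()` would sleep until the bucket refills, and a failed request would walk the `backoff_delays()` schedule before retrying.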

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Healthy signals · nkaz001/hftbacktest — RepoPilot