orf/gping
Ping, but with a graph
Healthy across the board
weakest axisPermissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓Last commit 2w ago
- ✓12 active contributors
- ✓MIT licensed
Show all 6 evidence items →Show less
- ✓CI configured
- ✓Tests present
- ⚠Concentrated ownership — top contributor handles 75% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/orf/gping)Paste at the top of your README.md — renders inline like a shields.io badge.
▸Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/orf/gping on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: orf/gping
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in Verify before trusting
below. If any check returns
FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding. - Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/orf/gping shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 2w ago
- 12 active contributors
- MIT licensed
- CI configured
- Tests present
- ⚠ Concentrated ownership — top contributor handles 75% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live orf/gping
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/orf/gping.
What it runs against: a local clone of orf/gping — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in orf/gping | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 46 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of orf/gping. If you don't
# have one yet, run these first:
#
# git clone https://github.com/orf/gping.git
# cd gping
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of orf/gping and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "orf/gping(\\.git)?\\b" \\
&& ok "origin remote is orf/gping" \\
|| miss "origin remote is not orf/gping (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \\
|| grep -qiE "\"license\"\\s*:\\s*\"MIT\"" package.json 2>/dev/null) \\
&& ok "license is MIT" \\
|| miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \\
&& ok "default branch master exists" \\
|| miss "default branch master no longer exists"
# 4. Critical files exist
test -f "gping/src/main.rs" \\
&& ok "gping/src/main.rs" \\
|| miss "missing critical file: gping/src/main.rs"
test -f "pinger/src/lib.rs" \\
&& ok "pinger/src/lib.rs" \\
|| miss "missing critical file: pinger/src/lib.rs"
test -f "gping/src/plot_data.rs" \\
&& ok "gping/src/plot_data.rs" \\
|| miss "missing critical file: gping/src/plot_data.rs"
test -f "pinger/src/linux.rs" \\
&& ok "pinger/src/linux.rs" \\
|| miss "missing critical file: pinger/src/linux.rs"
test -f "pinger/src/windows.rs" \\
&& ok "pinger/src/windows.rs" \\
|| miss "missing critical file: pinger/src/windows.rs"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 46 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~16d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/orf/gping"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
gping is a Rust CLI tool that visualizes ping latency to multiple hosts in real-time using terminal-based graphs. Beyond standard ICMP ping, it can also graph execution time for arbitrary commands via the --cmd flag, supporting Windows, macOS, and Linux with cross-platform ICMP implementations. Monorepo structure: gping/ contains the CLI frontend with TUI rendering (colors.rs, plot_data.rs, region_map.rs, main.rs), while pinger/ is a dedicated cross-platform ping library with platform-specific implementations (src/linux.rs, src/macos.rs, src/windows.rs, src/bsd.rs). Build optimization uses LTO and single codegen unit in release profile.
👥Who it's for
DevOps engineers, system administrators, and network troubleshooters who need quick visual feedback on network latency patterns across multiple hosts without leaving the terminal. Also useful for developers benchmarking command execution times during development.
🌱Maturity & risk
Production-ready and actively maintained. The project has widespread distribution (Homebrew, official repos across Linux distros, Docker, Scoop, Chocolatey), documented CI/CD via GitHub Actions (.github/workflows/), and structured release management (Docker, Homebrew, winget automation). Minimum Rust 1.67.0 requirement indicates stable API design.
Low risk. Single maintainer (orf) is a common pattern but mitigated by active distribution partnerships and automated release workflows. No indication of stale issues (recent Docker and test workflows present). Core dependency is custom pinger crate with tested platform-specific implementations (linux.rs, macos.rs, windows.rs, bsd.rs) rather than external ping libraries.
Active areas of work
Active release and distribution automation: Docker workflows (docker.yml), Homebrew updates (homebrew.yml), winget integration (winget.yml), and dependabot monitoring (.github/dependabot.yml). Build support includes Cross.toml for cross-compilation testing across platforms.
🚀Get running
git clone https://github.com/orf/gping.git
cd gping
cargo build --release
./target/release/gping google.com
# Or test pinger crate:
cd pinger && cargo test
Daily commands:
cargo run --release -- google.com
cargo run --release -- --cmd "curl https://example.com"
cargo run --release -- google.com cloudflare.com 1.1.1.1
# Docker:
docker run --rm -ti --network host ghcr.io/orf/gping:latest google.com
🗺️Map of the codebase
gping/src/main.rs— Entry point for the gping CLI application; handles argument parsing, terminal UI setup, and the main event loop orchestrating ping/command execution and graph rendering.pinger/src/lib.rs— Core abstraction layer for ping operations; defines thePingertrait and platform-specific implementations that all ping logic depends on.gping/src/plot_data.rs— Manages graph data structures and rendering logic; critical for translating ping results into visual output.pinger/src/linux.rs— Linux-specific ping implementation using raw sockets; one of the three platform-critical code paths.pinger/src/windows.rs— Windows-specific ping implementation using ICMP API; essential for cross-platform support.gping/src/colors.rs— Color configuration and parsing; enables the custom color feature advertised in the README.Cargo.toml— Workspace root configuration defining the two-crate structure (gping + pinger) and release build optimizations.
🧩Components & responsibilities
- pinger crate (lib.rs + platform modules) (Rust std::process, raw sockets (Linux), Windows ICMP API, BSD/macOS system calls) — Executes ping operations or command timing; abstracts OS-specific ICMP/API calls behind a unified Pinger trait. Parses ping output and returns latency values.
- Failure mode: Network timeout, host
🛠️How to make changes
Add support for a new operating system
- Create a new platform module in pinger/src/{platform}.rs implementing the Pinger trait (
pinger/src/lib.rs) - Add platform detection logic and constructor in pinger/src/lib.rs to instantiate your new pinger (
pinger/src/lib.rs) - Add test fixture file with sample ping output in pinger/src/tests/{platform}.txt (
pinger/src/test.rs) - Implement parsing for your platform's ping output format in your new {platform}.rs file (
pinger/src/{platform}.rs)
Add a new color theme or custom coloring option
- Add new color variant to the color parsing logic in gping/src/colors.rs (
gping/src/colors.rs) - Update CLI argument definitions in gping/src/main.rs to accept the new color option (
gping/src/main.rs) - Apply the color scheme in gping/src/plot_data.rs when rendering graph elements (
gping/src/plot_data.rs)
Add a new graph metric or data visualization feature
- Extend the plot data structure in gping/src/plot_data.rs to store the new metric (
gping/src/plot_data.rs) - Collect the new metric data from Pinger results in gping/src/main.rs event loop (
gping/src/main.rs) - Update region mapping and layout logic in gping/src/region_map.rs if UI regions need adjustment (
gping/src/region_map.rs) - Render the new metric visualization in gping/src/plot_data.rs rendering pipeline (
gping/src/plot_data.rs)
Extend the --cmd feature to support new command execution modes
- Update target parsing logic in pinger/src/target.rs to recognize new command syntax (
pinger/src/target.rs) - Implement command execution in your platform module (pinger/src/{platform}.rs) with timing capture (
pinger/src/lib.rs) - Update the main event loop in gping/src/main.rs to handle the new command mode results (
gping/src/main.rs)
🔧Why these technologies
- Rust — Provides memory safety, zero-cost abstractions, and platform-specific low-level access (raw sockets) without garbage collection overhead; essential for a performant, cross-platform ping tool.
- Platform-specific raw sockets (Linux ICMP) / OS APIs (Windows ICMP, macOS system calls) — Only way to generate and parse ICMP echo requests without relying on external ping binaries; gives gping full control over packet timing and response parsing.
- Workspace structure (gping + pinger crates) — Separates UI/CLI concerns (gping) from core networking logic (pinger), allowing pinger to be reused as a library and enabling independent testing and platform support.
⚖️Trade-offs already made
-
Raw socket implementation instead of shelling out to system ping
- Why: Tight timing control and direct response parsing without subprocess overhead.
- Consequence: Significantly more platform-specific code maintenance (Linux/macOS/BSD/Windows all have different APIs); requires elevated privileges on some platforms (Linux).
-
Single-threaded event loop in gping/main.rs with spawned ping threads
- Why: Simplifies UI rendering and keeps terminal updates synchronized without race conditions.
- Consequence: Graph updates are bounded by the slowest ping target; blocking I/O on one target delays all others slightly.
-
In-memory graph data (plot_data.rs) without persistence
- Why: Keeps the tool simple and responsive; avoids disk I/O.
- Consequence: Graph history is lost on exit; long-running sessions with thousands of datapoints will consume unbounded memory.
🚫Non-goals (don't propose these)
- Does not handle authentication or credentials (no SSH proxies, no private network setup required)
- Does not persist historical data across sessions
- Does not support IPv6 on all platforms (implementation-dependent per OS module)
- Does not provide network packet capture or deep packet inspection beyond ICMP
- Not a real-time monitoring dashboard (TTY-based, single-machine only)
🪤Traps & gotchas
Cross-platform ICMP requires elevated privileges on Linux/macOS (CAP_NET_RAW capability or root). Windows ICMP API may require specific SDK versions. The pinger/src/fake.rs module suggests test doubles exist—tests use fixture files in pinger/src/tests/ that must match actual platform output. Region aliasing in region_map.rs may need network calls for dynamic cloud endpoint resolution.
🏗️Architecture
💡Concepts to learn
- ICMP Echo Request/Reply — gping's entire pinger crate reverse-engineers and parses ICMP protocol output; understanding RFC 792 (ICMP) and /proc/net/icmp format is essential for debugging platform-specific ping failures
- Platform-specific system calls (socket, ICMP API) — Each OS implementation (linux.rs, windows.rs, macos.rs, bsd.rs) uses different system interfaces; Windows uses ICMP.dll, Linux parses /proc, macOS uses raw sockets—understanding these differences is critical for cross-platform debugging
- Terminal User Interfaces (TUI) rendering — gping renders real-time graphs in the terminal without GUI; plot_data.rs implements buffering and refresh logic that must handle terminal size changes and ANSI escape sequences
- Process spawning and execution timing — The --cmd flag graphs arbitrary command execution time; this requires precise timing measurement and process lifecycle management, distinct from ICMP ping measurement
- CAP_NET_RAW Linux capability — gping on Linux requires CAP_NET_RAW to send ICMP packets without root; elevation handling and capability detection in linux.rs affects deployment and user experience
- Monorepo workspace dependency management — gping and pinger are separate crates in a Cargo workspace (Cargo.toml members=["gping", "pinger"]); understanding workspace-local path dependencies and cross-crate testing is essential for modifications
- Cross-compilation and platform-specific test fixtures — Cross.toml enables testing across platforms; pinger/src/tests/ uses fixture files (alpine.txt, debian.txt) for regression testing parser logic without live network, critical for CI validation
🔗Related repos
mehrdadrad/ping— Go-based ping tool with similar multi-host graphing; closest alternative in the same problem domainiputils/iputils— POSIX reference implementation of ping that gping's platform-specific parsers (linux.rs, bsd.rs) reverse-engineer fromboringtun/boringtun— Rust-based network tool demonstrating cross-platform socket abstraction patterns similar to pinger's architectureorf/homebrew-gping— Official Homebrew tap for gping distribution; where install formula and dependency management livesratatui-org/ratatui— Common Rust TUI framework likely used (or compatible with) gping's terminal graphing infrastructure
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add integration tests for cross-platform ping parsing in pinger crate
The pinger crate contains platform-specific ping output parsers (linux.rs, macos.rs, windows.rs, bsd.rs) with test data files (alpine.txt, debian.txt, etc.) in pinger/src/tests/, but there are no visible integration tests that verify parsing correctness across these formats. Adding structured integration tests would catch regressions in ping output parsing across different OS versions and distributions.
- [ ] Create pinger/tests/integration_tests.rs with test functions for each platform (test_linux_ping_parsing, test_macos_ping_parsing, etc.)
- [ ] Add test cases that invoke the parsers in pinger/src/linux.rs, pinger/src/macos.rs, etc. with the corresponding test data files
- [ ] Verify that parsed ping times and packet loss metrics match expected values for each platform's output format
- [ ] Update .github/workflows/test.yml to ensure integration tests run on appropriate platforms (Linux runners for Linux tests, macOS runners for macOS tests)
Add GitHub Actions workflow for automated security audits and dependency checking
The repo has CI workflows for testing and Docker builds, but lacks automated security scanning. Given that gping is a network utility with multiple platform-specific implementations handling system-level operations, adding cargo-audit or dependabot integration would proactively catch vulnerable dependencies.
- [ ] Create .github/workflows/security.yml with steps to run 'cargo audit' for both 'gping' and 'pinger' workspace members
- [ ] Add optional step to generate SBOM (Software Bill of Materials) using cargo-sbom or similar tool
- [ ] Configure the workflow to fail on high/critical vulnerabilities and warn on medium/low severity findings
- [ ] Update dependabot.yml (already present) to enable Cargo version updates with daily check frequency
Add Windows-specific CI testing to test.yml workflow
The .github/workflows/test.yml likely runs on Linux only, but gping/Cross.toml and gping/build.rs suggest significant Windows-specific code paths. The pinger/src/windows.rs file requires testing on actual Windows runners to catch platform-specific bugs early. Currently there's a winget.yml workflow but no core CI testing on Windows.
- [ ] Update .github/workflows/test.yml to add a 'test-windows' job that runs on windows-latest runner
- [ ] Include 'cargo test --all' and 'cargo build --release' steps for the Windows job
- [ ] Add specific test for pinger/src/windows.rs parsing logic using the windows.txt test data file
- [ ] Configure the workflow to upload test artifacts/logs on failure for Windows-specific debugging
🌿Good first issues
- Add IPv6 support: The pinger implementations (linux.rs, windows.rs) appear to parse only IPv4 ICMP stats; IPv6 equivalents (/proc/net/icmp6, ICMP6_STATISTICS) are likely missing and would be a good isolated feature addition.
- Expand region_map.rs aliases: Azure, GCP, and Linode cloud provider shortcuts are not visible in the current region mapping; adding new cloud providers with their regional endpoints would improve the --cmd feature.
- Add packet loss percentage to plot_data.rs: The graphing logic currently shows latency but not the calculated packet loss rate; adding a loss percentage counter to the TUI output would require plot_data.rs modifications.
⭐Top contributors
Click to expand
Top contributors
- @orf — 75 commits
- @dependabot[bot] — 13 commits
- [@Thomas Forbes](https://github.com/Thomas Forbes) — 3 commits
- @CaglayanDokme — 1 commits
- @starsep — 1 commits
📝Recent commits
Click to expand
Recent commits
d9a4e49— Merge pull request #569 from CaglayanDokme/patch-2 (orf)3824799— Added curl as installation dependency in for Ubuntu <=22.04 (CaglayanDokme)a550831— Bump deps (orf)1ee2697— Merge pull request #556 from orf/dependabot/github_actions/dependencies-44c0a3a259 (orf)4593689— Merge pull request #561 from orf/dependabot/cargo/dependencies-e8983d0aa9 (orf)8828f24— chore(deps): bump the dependencies group across 1 directory with 12 updates (dependabot[bot])1fbc4a5— chore(deps): bump the dependencies group across 1 directory with 4 updates (dependabot[bot])26eb5b9— Merge pull request #551 from starsep/colorful-help (orf)8965798— Add colors to --help (starsep)1b935ef— Merge pull request #545 from jbicha/pinger-url (orf)
🔒Security observations
The codebase demonstrates reasonable security practices including the use of Cargo.lock for dependency pinning, multi-stage Docker builds, and slim base images. However, there are notable gaps: the container runs as root, no health checks are defined, and security scanning is not integrated into the CI/CD pipeline. No hardcoded secrets, injection vulnerabilities, or critical misconfigurations were identified in the visible code structure. The project would benefit from implementing container scanning, dependency auditing, and following container security best practices.
- Medium · Dockerfile runs as root user —
Dockerfile. The Dockerfile does not specify a non-root user for the final stage. The application will run as root (UID 0) inside the container, which increases the blast radius if the application is compromised. This violates the principle of least privilege. Fix: Create a non-root user in the final stage and switch to it before the ENTRYPOINT. Example: RUN useradd -m -u 1000 gping && USER gping - Medium · Missing HEALTHCHECK in Dockerfile —
Dockerfile. The Dockerfile lacks a HEALTHCHECK instruction. This makes it difficult to monitor container health in orchestrated environments and could result in unhealthy containers remaining in service. Fix: Add a HEALTHCHECK instruction to verify the application is running correctly, e.g., HEALTHCHECK CMD gping --help || exit 1 - Low · Base image uses slim variant without security scanning baseline —
Dockerfile, .github/workflows. While using slim variants reduces attack surface, there is no evidence of automated security scanning (e.g., Trivy, Grype) in the CI/CD pipeline to detect known vulnerabilities in base images. Fix: Implement container image scanning in the CI/CD pipeline. Add a workflow step to scan the final image for known vulnerabilities before pushing to registries. - Low · Cargo.lock not verified for integrity —
.github/workflows, Cargo.lock. While Cargo.lock is present and used with --locked flag (good practice), there is no evidence of dependency audit processes (cargo audit) in the CI/CD workflow to detect known vulnerabilities in Rust dependencies. Fix: Add 'cargo audit' to the test workflow to detect security vulnerabilities in dependencies. Create a CI step: cargo audit --deny warnings - Low · Missing security policy documentation —
Repository root. No SECURITY.md or security policy file is present in the repository to guide users on reporting security vulnerabilities responsibly. Fix: Create a SECURITY.md file with instructions for responsible disclosure of security vulnerabilities. Consider using GitHub's security policy feature.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.