RepoPilotOpen in app →

perspective-dev/perspective

A data visualization and analytics component, especially well-suited for large and/or streaming datasets.

Healthy

Healthy across all four use cases

Use as dependencyHealthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modifyHealthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn fromHealthy

Documented and popular — useful reference codebase to read through.

Deploy as-isHealthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 3d ago
  • 4 active contributors
  • Apache-2.0 licensed
Show 4 more →
  • CI configured
  • Tests present
  • Small team — 4 contributors active in recent commits
  • Single-maintainer risk — top contributor 97% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/perspective-dev/perspective)](https://repopilot.app/r/perspective-dev/perspective)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/perspective-dev/perspective on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: perspective-dev/perspective

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/perspective-dev/perspective shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 3d ago
  • 4 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • ⚠ Small team — 4 contributors active in recent commits
  • ⚠ Single-maintainer risk — top contributor 97% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live perspective-dev/perspective repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/perspective-dev/perspective.

What it runs against: a local clone of perspective-dev/perspective — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters | |---|---|---| | 1 | You're in perspective-dev/perspective | Confirms the artifact applies here, not a fork | | 2 | License is still Apache-2.0 | Catches relicense before you depend on it | | 3 | Default branch master exists | Catches branch renames | | 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code | | 5 | Last commit ≤ 33 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>perspective-dev/perspective</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of perspective-dev/perspective. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/perspective-dev/perspective.git
#   cd perspective
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of perspective-dev/perspective and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "perspective-dev/perspective(\\.git)?\\b" \\
  && ok "origin remote is perspective-dev/perspective" \\
  || miss "origin remote is not perspective-dev/perspective (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(Apache-2\\.0)" LICENSE 2>/dev/null \\
   || grep -qiE "\"license\"\\s*:\\s*\"Apache-2\\.0\"" package.json 2>/dev/null) \\
  && ok "license is Apache-2.0" \\
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \\
  && ok "default branch master exists" \\
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "Cargo.toml" \\
  && ok "Cargo.toml" \\
  || miss "missing critical file: Cargo.toml"
test -f "README.md" \\
  && ok "README.md" \\
  || miss "missing critical file: README.md"
test -f "DEVELOPMENT.md" \\
  && ok "DEVELOPMENT.md" \\
  || miss "missing critical file: DEVELOPMENT.md"
test -f ".github/workflows/build.yaml" \\
  && ok ".github/workflows/build.yaml" \\
  || miss "missing critical file: .github/workflows/build.yaml"
test -f "docs/md/explanation/architecture.md" \\
  && ok "docs/md/explanation/architecture.md" \\
  || miss "missing critical file: docs/md/explanation/architecture.md"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 33 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~3d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/perspective-dev/perspective"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Perspective is a high-performance interactive analytics and data visualization engine built in C++/Rust and compiled to WebAssembly, Python, and Rust. It provides a framework-agnostic Custom Element UI (web component) paired with a pluggable Data Model API that can query external sources like DuckDB while supporting fast columnar streaming via Apache Arrow with a built-in expression language for computed columns. Multi-language monorepo with Cargo workspace (see Cargo.toml) managing Rust crates: rust/perspective (core), rust/perspective-client, rust/perspective-server, rust/perspective-viewer, and rust/perspective-python. Frontend code lives in TypeScript packages (CONTRIBUTING.md and docs structure suggest packages/ layout). C++ core is compiled to WASM and native binaries. Dev environment managed via devcontainer (see .devcontainer/devcontainer.json).

👥Who it's for

Full-stack developers, data scientists, and BI teams building real-time dashboards, JupyterLab notebooks, and streaming analytics applications who need memory-efficient querying of large datasets without writing custom chart or table rendering code. Both web frontend developers (using the Custom Element) and Python/Rust backend developers (using the native bindings).

🌱Maturity & risk

Actively developed and production-ready. The monorepo shows strong CI/CD coverage (build.yaml workflow), organized multi-language build system (Cargo workspace + npm), and comprehensive language support (C++, Rust, TypeScript, Python). The presence of governance docs (GOVERNANCE.md, PROJECT_CHARTER.md) and security policy (SECURITY.md) indicates mature project stewardship.

Low-to-medium risk for adopters but requires some infrastructure knowledge. The C++/Rust core means platform-specific compilation is necessary (mitigated by WebAssembly), and the monorepo spans 5 languages which increases maintainer cognitive load. No obvious red flags from the file structure, but streaming APIs and virtual server integrations (ClickHouse, DuckDB) add surface area for edge cases.

Active areas of work

Not visible from static file list alone, but the presence of GitHub Actions (build.yaml) with push triggers and the organized .github/workflows structure suggests active CI/CD. CHANGELOG.md and DEVELOPMENT.md indicate ongoing version management. Workspace has multiple recent Rust crates (perspective-viewer, perspective-server) suggesting feature development in visualization and remote execution.

🚀Get running

Clone the repo: git clone https://github.com/perspective-dev/perspective.git && cd perspective. Install dependencies via devcontainer or manually (see .devcontainer/postcreate.sh for setup steps). Use cargo build for Rust crates and pnpm install for TypeScript (inferred from .pnpmrc presence). Check DEVELOPMENT.md for exact build commands.

Daily commands: For browser dev: pnpm install && pnpm dev (check workspace root package.json). For Rust: cargo build --release -p perspective. For Python: build wheels via python setup.py build_ext or install from PyPI. For Jupyter: install perspective-python and import widget. Full steps in DEVELOPMENT.md.

🗺️Map of the codebase

  • Cargo.toml — Root workspace configuration defining all Rust crates and dependencies; essential for understanding the multi-language build system and core engine dependencies.
  • README.md — High-level overview of Perspective's capabilities, architecture, and multi-platform support (JavaScript, Python, Rust); mandatory reading for context.
  • DEVELOPMENT.md — Development workflow, build instructions, and local setup for both Rust and JavaScript components; required for contributing any code.
  • .github/workflows/build.yaml — CI/CD pipeline defining test, build, and release processes across Rust, Python, and JavaScript; controls quality gates for all PRs.
  • docs/md/explanation/architecture.md — Core architectural documentation explaining client-only, server-only, and client-server deployments; foundation for understanding data flow.
  • .pnpmrc — Package manager configuration for the JavaScript monorepo; required for installing dependencies and building frontend components.
  • Cargo.lock — Locked Rust dependencies ensuring reproducible builds across the core C++/Wasm engine and Rust bindings.

🛠️How to make changes

Add a New Data Source / Virtual Server

  1. Document the new server type in the architecture explanation (docs/md/explanation/virtual_servers.md)
  2. Create a how-to guide for the new virtual server with configuration examples (docs/md/how_to/javascript/virtual_server/custom.md)
  3. Add the server implementation as a new workspace member in Cargo.toml if Rust-based, or add to JavaScript packages (Cargo.toml)
  4. Update CI/CD to test and build the new server variant (.github/workflows/build.yaml)

Add a New Visualization or View Configuration Option

  1. Document the new view configuration option in the view configuration guide (docs/md/explanation/view/config/selection_and_ordering.md)
  2. Add API reference documentation for the new option (docs/md/api_reference.md)
  3. Create a how-to example in the JavaScript guide (docs/md/how_to/javascript/viewer.md)
  4. Update the core engine (Rust crates in Cargo.toml workspace) to support the new view configuration (Cargo.toml)

Implement a New Language Binding (e.g., Ruby, Go)

  1. Create a new crate in the Rust workspace for the FFI layer (Cargo.toml)
  2. Add build and test steps for the new binding in CI/CD (.github/workflows/build.yaml)
  3. Document the new binding in getting-started and how-to guides (docs/md/getting_started.md)
  4. Add language-specific installation and usage guides (docs/md/how_to)

Add a Custom Expression or Aggregation Function

  1. Document the new function in the expressions guide (docs/md/explanation/view/config/expressions.md)
  2. Update API reference with function signature and examples (docs/md/api_reference.md)
  3. Implement the function in the core Rust engine (update Cargo workspace members) (Cargo.toml)
  4. Add unit and integration tests via CI/CD (.github/workflows/build.yaml)

🔧Why these technologies

  • Rust + WebAssembly (core engine) — Provides high-performance columnar computation and memory-efficient data processing; compiled to WASM for browser execution or native binaries for servers.
  • JavaScript (client bindings & UI) — Primary client-side interface; enables interactive dashboards, notebooks, and web applications with framework-agnostic bindings.
  • Python bindings — Integrates with data science / ML workflows (Jupyter, Pandas) and enables server-side computation for large datasets.
  • Apache Arrow (IPC format) — Zero-copy, language-agnostic columnar serialization for efficient data transfer between processes and over the network.
  • Virtual Servers (DuckDB, ClickHouse, custom) — Enables lazy evaluation and remote computation; delegates heavy lifting to specialized analytical databases without loading entire datasets into memory.

⚖️Trade-offs already made

  • Columnar (not row-oriented) in-memory storage

    • Why: Optimizes analytical queries and aggregations; better cache locality and vectorization.
    • Consequence: Slower for single-row updates; requires incremental aggregation strategies rather than full table rebuilds.
  • Client-only vs. server-only vs. hybrid deployment models

    • Why: Flexibility to handle datasets of any size: small data stays in browser, large data stays on server, medium data can flow either direction.
    • Consequence: Developers must choose deployment topology upfront; no single best model for all use cases.
  • Streaming incremental updates rather than full table replacement

    • Why: Enables efficient handling of unbounded data streams and real-time dashboards; minimizes bandwidth.
    • Consequence: Requires careful state management and ordering semantics; edge cases around late-arriving data.
  • Expression-based aggregations instead of hard-coded functions

    • Why: Maximizes flexibility for custom metrics and reduces API surface area.
    • Consequence: Expression parsing and validation adds latency; potential for user error in complex expressions.

🪤Traps & gotchas

Multi-language builds: C++ compilation requires platform-specific toolchain (clang); WASM builds need emsdk. Monorepo interdependencies: Cargo path dependencies (see [patch.crates-io]) mean local crate changes require careful ordering. WebAssembly size: WASM binary may be large; check optimization profiles in Cargo.toml (lto=true, strip=true for release). Streaming protocol: Custom WebSocket protocol between client/server; not HTTP/REST, so proxy configs may conflict. Python GIL: Native Rust bindings bypass GIL but require thread-safe Arc wrappers; async/await in Python client may have sharp edges.

🏗️Architecture

💡Concepts to learn

  • Apache Arrow Columnar Format — Perspective's entire data representation and streaming protocol is built on Arrow; understanding columnar vs row-oriented storage is critical for optimizing large dataset queries.
  • WebAssembly (WASM) Compilation — The C++ core compiles to WASM for browser delivery; you need to understand WASM module boundaries, size tradeoffs, and JavaScript-WASM interop to modify the engine.
  • Custom Elements (Web Components) — Perspective's UI is delivered as a framework-agnostic Custom Element; understanding the Custom Element lifecycle (connectedCallback, attributeChangedCallback) is essential for UI modifications.
  • Pluggable Data Model API — The core abstraction allowing Perspective UI to query DuckDB, ClickHouse, or custom backends; understanding this interface lets you add new data sources without modifying the visualization layer.
  • ExprTK Expression Language — Used for computed columns and filters; it's a domain-specific language embedded in the query engine, so extending expression capabilities requires modifying the parser/evaluator.
  • Monorepo Workspace Management — The Cargo workspace (Cargo.toml) and npm/pnpm monorepo structure require understanding cross-crate dependencies and patched versions ([patch.crates-io]); missteps break multi-language builds.
  • Streaming and Real-Time Updates — Perspective's key differentiator is efficient streaming of incremental updates (appends/upserts) without full table recomputation; understanding the update protocol is critical for server integrations.
  • apache/arrow — Perspective's columnar format and streaming protocol are built on Apache Arrow; understanding Arrow data layout is essential for performance optimization.
  • plotly/plotly.js — Alternative JavaScript visualization library; Perspective differentiates by adding data grid + streaming + server-side query engine, but both compete for dashboard developers.
  • jupyterlab/jupyterlab — Perspective includes a native JupyterLab widget; understanding the widget protocol helps extend notebook integration.
  • duckdb/duckdb — Perspective's pluggable Data Model API explicitly supports DuckDB as a virtual server backend; users often pair them for analytical workflows.
  • ClickHouse/ClickHouse — Like DuckDB, ClickHouse is a supported backend via Virtual Server; important ecosystem integration for large-scale analytics.

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive CI workflow for Rust workspace member validation

The repo has a Cargo workspace with 11 members (lint, metadata, bundle, perspective, perspective-client, perspective-js, perspective-python, perspective-server, perspective-viewer, and examples/rust-axum) but the build.yaml workflow doesn't explicitly show per-member testing. A dedicated workflow should validate each workspace member builds independently, runs tests, and checks formatting with clippy for the specific Rust packages. This ensures no internal dependency issues or incomplete member configurations.

  • [ ] Review .github/workflows/build.yaml to see current Rust testing coverage
  • [ ] Create a new .github/workflows/rust-workspace.yaml that runs 'cargo build', 'cargo test', and 'cargo clippy' for each member in Cargo.toml members array
  • [ ] Add matrix strategy for different Rust versions (MSRV + latest)
  • [ ] Validate profile.dev and profile.release settings don't conflict with member-specific Cargo.toml files
  • [ ] Test with examples/rust-axum as a real-world example

Add architecture decision records (ADRs) in docs/md/explanation/architecture/

The repo has detailed architecture diagrams (architecture.svg, client_only.md, client_server.md, server_only.md) but lacks formalized ADRs explaining why these architectural decisions were made. This is critical for contributors understanding trade-offs, especially for features involving the Rust backend, Python bindings, and JavaScript client. ADRs would document decisions around Cargo workspace structure, client-server protocol design, and streaming dataset handling.

  • [ ] Create docs/md/explanation/architecture/adr/ directory
  • [ ] Write ADR-001: Rust-based backend architecture rationale (vs alternatives)
  • [ ] Write ADR-002: Client-server protocol design for streaming datasets
  • [ ] Write ADR-003: Multi-language bindings strategy (JS, Python, Rust client)
  • [ ] Reference ADRs from docs/md/SUMMARY.md and main architecture.md

Add integration tests validating table operations across Python/Rust/JS boundary

The repo spans Rust (core), Python (perspective-python), and JavaScript (perspective-js) with bindings. There are likely unit tests in each layer, but integration tests validating data flow from Python → Rust core → JS client (especially for streaming/updates) are missing or not visible in the file structure. This catches serialization/deserialization bugs early and prevents regressions in cross-language interop.

  • [ ] Examine existing test structure in rust/perspective-python/ and rust/perspective-js/ directories
  • [ ] Create rust/perspective-python/tests/integration_test_round_trip.py that creates a table via Python API, verifies Rust core state, and validates JS client can consume it
  • [ ] Add test case for streaming updates: Python pushes rows → Rust processes → JS receives updates in correct order
  • [ ] Create a corresponding Rust integration test in rust/perspective/tests/ validating serialization format
  • [ ] Add new test job to .github/workflows/build.yaml to run these integration tests

🌿Good first issues

  • Add TypeScript tests for the DuckDB and ClickHouse Virtual Server integrations mentioned in README but not obviously covered in visible file structure; start in docs/md/ to find API reference gaps.
  • Document the ExprTK expression language syntax with examples in DEVELOPMENT.md (currently only mentioned in README); useful for users learning computed columns but no dedicated guide visible.
  • Create a minimal Rust example app in examples/rust-axum/ showing a complete server + client setup (repo has examples/ but structure suggests this is a template not filled in); helps Rust users get started faster.

Top contributors

Click to expand

📝Recent commits

Click to expand
  • d282b71 — Merge pull request #3172 from perspective-dev/gh-issues-roulette (texodus)
  • ace5c72 — Expose contains for ExprTk, docs and tests (texodus)
  • 1bc0929 — Update pyarrow.Table docs and tests (texodus)
  • f736ce9 — Add coalesce ExprTK method (texodus)
  • 60903e8 — Merge pull request #3171 from perspective-dev/arrow-multi-batch (texodus)
  • 4c669fc — Fix multi-batch Arrow null validity bug (texodus)
  • f70d1be — Merge pull request #3168 from perspective-dev/multi-chart-webgl (texodus)
  • 2ab561b — Add multi-chart (texodus)
  • d923699 — Remove results.tar.gz for out-of-repo archives (texodus)
  • 42783ec — Merge pull request #3166 from perspective-dev/perspective-charts (texodus)

🔒Security observations

The Perspective codebase demonstrates generally good security practices with a defined vulnerability reporting process and hardened release profiles. However, there are concerns about incomplete dependency configuration in Cargo.toml and reliance on external Git repositories for critical dependencies like protobuf. The primary risks are around supply chain integrity and incomplete configuration management. The security score reflects moderate confidence in the current setup; improvements to dependency documentation, configuration validation, and deployment-mode-specific security guidance would strengthen the overall posture.

  • Medium · Incomplete Cargo.toml Patch Configuration — Cargo.toml - [patch.crates-io] section. The Cargo.toml file contains an incomplete patch entry for crates-io that ends abruptly with just 'p' on the last line. This suggests the file is truncated or malformed, which could lead to unexpected dependency resolution behavior or build failures. While not a direct security vulnerability, it indicates incomplete configuration that should be reviewed. Fix: Review and complete the patch configuration. Ensure all patch entries are properly defined with valid git references or local paths.
  • Medium · Protobuf Dependency from External Git Repository — Cargo.toml - protobuf-src patch. The protobuf-src dependency is pulled from a specific commit (1aba500e469f8bdc384a0fe9e69c189fda72e059) of an external GitHub repository. While the commit is pinned, this represents a supply chain risk if the repository or commit becomes unavailable or is compromised. Fix: Verify the source repository is trustworthy and maintained. Consider using official crates.io versions when available. Document why the custom fork is necessary. Regularly audit for upstream updates.
  • Low · Aggressive Security-First Release Profile Configuration — Cargo.toml - [profile.release]. The release profile uses LTO (link-time optimization), symbol stripping, and single codegen-unit. While these are security/optimization best practices, they can make debugging production issues more difficult. No line number information will be available in crashes. Fix: Consider maintaining debug symbols separately for production binaries to aid in post-mortem analysis. Use a separate profile for public releases if needed.
  • Low · Missing CVSS/Security Baseline Documentation — SECURITY.md. While SECURITY.md exists and defines a vulnerability reporting process, there is no documented security baseline, known limitations, or specific security configurations required for the various deployment modes (client-only, server-only, client-server). Fix: Enhance SECURITY.md to include: supported versions policy clarity, security architecture documentation for each deployment mode, known security limitations, and a security changelog.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Healthy signals · perspective-dev/perspective — RepoPilot