nerfstudio-project/nerfstudio
A collaboration friendly studio for NeRFs
Healthy across all four use cases
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license, tests, and CI — a clean foundation to fork and modify.
- Documented and popular — a useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 9mo ago
- ✓ 49+ active contributors
- ✓ Distributed ownership (top contributor 9% of recent commits)
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Slowing — last commit 9mo ago
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/nerfstudio-project/nerfstudio)
Paste at the top of your README.md — renders inline like a shields.io badge.
Onboarding doc
Onboarding: nerfstudio-project/nerfstudio
Generated by RepoPilot · 2026-05-07 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/nerfstudio-project/nerfstudio shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 9mo ago
- 49+ active contributors
- Distributed ownership (top contributor 9% of recent commits)
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Slowing — last commit 9mo ago
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live nerfstudio-project/nerfstudio repo on your machine still matches what RepoPilot saw. If any check fails, the artifact is stale — regenerate it at repopilot.app/r/nerfstudio-project/nerfstudio.
What it runs against: a local clone of nerfstudio-project/nerfstudio — the script inspects the git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in nerfstudio-project/nerfstudio | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 312 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of nerfstudio-project/nerfstudio. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/nerfstudio-project/nerfstudio.git
#   cd nerfstudio
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of nerfstudio-project/nerfstudio and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "nerfstudio-project/nerfstudio(\.git)?\b" \
  && ok "origin remote is nerfstudio-project/nerfstudio" \
  || miss "origin remote is not nerfstudio-project/nerfstudio (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
# (Apache LICENSE files open with "Apache License", not the SPDX identifier.)
(grep -qiE "Apache License|Apache-2\.0" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in \
  "nerfstudio/pipelines/pipeline.py" \
  "nerfstudio/data/dataparsers/base_dataparser.py" \
  "nerfstudio/fields/field.py" \
  "nerfstudio/models/base_model.py" \
  "nerfstudio/data/datamanagers/base_datamanager.py"; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 312 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~282d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/nerfstudio-project/nerfstudio"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).
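That composition pattern can be sketched in Python for agents that drive the check programmatically. The wrapper below and its demo command are illustrative, not part of RepoPilot; in practice you would pass something like ["bash", "verify.sh"] from inside a clone.

```python
import subprocess
import sys

def artifact_is_fresh(verify_cmd):
    """Run a RepoPilot-style verify script and report whether it passed.

    The script's contract: exit 0 when every check prints "ok:",
    non-zero when any check prints "FAIL:".
    """
    result = subprocess.run(verify_cmd, capture_output=True, text=True)
    return result.returncode == 0, result.stdout

# Demo with a stand-in command that mimics a passing check; a real agent
# loop would regenerate the artifact whenever this returns False.
fresh, log = artifact_is_fresh(
    [sys.executable, "-c", "print('ok: demo'); raise SystemExit(0)"]
)
print(fresh)  # True
```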
⚡TL;DR
Nerfstudio is an open-source framework for training, visualizing, and experimenting with Neural Radiance Fields (NeRFs) through a plug-and-play API. It provides end-to-end pipelines for 3D scene reconstruction from 2D images, with support for multiple NeRF variants (Mip-NeRF, Instant-NGP, Nerfacto) and a real-time interactive viewer built on Three.js. The core value is dramatically simplifying NeRF research and deployment compared to reference implementations. The codebase is a monolithic Python package under nerfstudio/ with modular subsystems: datamanagers/ handles data loading and sampling, dataparsers/ supports multiple camera formats (COLMAP, Polycam, etc.), fields/ contains NeRF representations, and pipelines/ orchestrates training workflows. The frontend viewer is a separate React/TypeScript app in the viewer/ directory using Redux for state, Three.js for 3D rendering, and socket.io for communication with the training server. Training is built directly on PyTorch with nerfstudio's own trainer loop.
👥Who it's for
Computer vision researchers and practitioners building or experimenting with NeRF-based 3D reconstruction; ML engineers integrating neural scene representations into production pipelines; students learning volumetric rendering; teams collaborating on 3D capture and reconstruction projects who need a modular, extensible framework rather than monolithic implementations.
🌱Maturity & risk
Production-ready and actively maintained. The project has comprehensive CI (core_code_checks.yml, doc.yml, publish.yml workflows), a substantial Python codebase, a detailed documentation structure (docs/ with developer guides), and PyPI distribution. A large Discord community and regular releases indicate sustained development momentum.
Low-to-moderate risk: the dependency tree is substantial (torch, tiny-cuda-nn, torchvision, scipy, matplotlib, tensorboard, etc.), with some heavy ML/GPU dependencies that may have compatibility issues across CUDA versions. The GPU requirement (CUDA/RTX) limits accessibility. The monorepo spans multiple feature domains (training, data parsing, 3D visualization), which increases change surface area, but the strong CI pipeline mitigates regression risk.
Active areas of work
Active development on NeRF variants and dataloader improvements. The repository maintains dev container setup (.devcontainer/docker-compose.yml), has pre-commit hooks configured, and uses GitHub Actions for continuous validation. Recent focus areas inferred from structure: supporting additional camera intrinsics formats, expanding dataparser plugins, and enhancing the viewer's real-time rendering performance.
🚀Get running
git clone https://github.com/nerfstudio-project/nerfstudio.git && cd nerfstudio && pip install -e . (development install). For the viewer: cd viewer && npm install && npm start. For training: ns-train nerfacto --help (a CLI is provided). GPU/CUDA setup is required; use .devcontainer/docker-compose.yml for containerized setup.
Daily commands: Training: ns-train nerfacto --data <path-to-images> (trains the default Nerfacto method). Viewer (dev): cd viewer && npm start (launches the React dev server on :3000, which connects to the training process via socket.io). Production viewer: npm run build && serve -s build. See docs/developer_guides/config.md for advanced configuration.
🗺️Map of the codebase
- nerfstudio/pipelines/pipeline.py — Core pipeline abstraction that orchestrates the training loop, data management, and model rendering; every method implementation depends on this interface
- nerfstudio/data/dataparsers/base_dataparser.py — Base class for all data parsers; defines how scene metadata and camera poses are loaded from disk before training begins
- nerfstudio/fields/field.py — Core field abstraction representing the neural density and color functions; fundamental to all NeRF variants
- nerfstudio/models/base_model.py — Base model class that ties together fields, renderers, and loss functions; required entry point for all training methods
- nerfstudio/data/datamanagers/base_datamanager.py — Abstract data manager that handles batch sampling, ray generation, and GPU memory management during training
- nerfstudio/cameras/camera_utils.py — Camera pose and intrinsics utilities used throughout the codebase for ray casting and rendering
- .github/workflows/core_code_checks.yml — CI pipeline that validates all pull requests; understanding test expectations is essential before submitting changes
🧩Components & responsibilities
- DataParser (Python, NumPy, OpenCV) — Extracts scene metadata (camera poses, image filepaths, intrinsics) from raw dataset directory and normalizes to canonical coordinate frame
- Failure mode: Incorrect pose parsing → training divergence; mismatched image/pose counts → runtime crash
- DataManager (PyTorch, CUDA) — Manages GPU memory by batching rays, caching preprocessed images, and implementing rejection sampling or importance weighting strategies
- Failure mode: Insufficient GPU memory → OOM; biased batch sampling → training mode collapse
- Field Network (PyTorch nn.Module) — Maps ray samples (position, direction, embedded features) to density and RGB color via MLP or other implicit function approximator
- Failure mode: Poor positional encoding → aliasing artifacts; NaN outputs → training crash
- Renderer (PyTorch) — Composites per-ray field outputs (density, color) into pixel RGB, depth, and accumulation via the volume rendering equation
- Failure mode: Degenerate sample weights → blank or NaN renders
🛠️How to make changes
Add a New Training Method
- Create a new field class by inheriting from nerfstudio/fields/field.py and implementing the forward() method for density and color (nerfstudio/fields/field.py)
- Create a new model by inheriting from nerfstudio/models/base_model.py and composing your field with a renderer and loss functions (nerfstudio/models/base_model.py)
- Create a new pipeline config in nerfstudio/configs/ or as a method definition, then register it using the @dataclass and method registry decorators (docs/developer_guides/new_methods.md)
- Add unit tests in tests/ that validate your field forward pass and model training step (.github/workflows/core_code_checks.yml)
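As a rough shape for the first step, here is a framework-free PyTorch sketch of what a field's forward pass computes. ToyField and its layer sizes are invented for illustration; the real base class in nerfstudio/fields/ defines the actual interface and signatures.

```python
import torch
import torch.nn as nn

class ToyField(nn.Module):
    """Schematic NeRF field: maps 3D positions (plus view directions)
    to density and RGB. Illustrative only; not nerfstudio's API."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # Position branch emits density plus a feature vector.
        self.density_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1 + hidden)
        )
        # Color branch conditions on features and view direction.
        self.color_mlp = nn.Sequential(
            nn.Linear(hidden + 3, hidden), nn.ReLU(), nn.Linear(hidden, 3)
        )

    def forward(self, positions: torch.Tensor, directions: torch.Tensor):
        h = self.density_mlp(positions)          # [N, 1 + hidden]
        density = torch.relu(h[:, :1])           # non-negative density
        rgb = torch.sigmoid(
            self.color_mlp(torch.cat([h[:, 1:], directions], dim=-1))
        )                                        # colors in [0, 1]
        return density, rgb

field = ToyField()
density, rgb = field(torch.rand(8, 3), torch.rand(8, 3))
print(density.shape, rgb.shape)
```

A real method would add positional encoding on the inputs and register the field inside a model config, per the checklist above.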
Add a New Data Parser
- Inherit from nerfstudio/data/dataparsers/base_dataparser.py and implement _generate_dataparser_outputs() to extract camera poses, image paths, and intrinsics (nerfstudio/data/dataparsers/base_dataparser.py)
- Use nerfstudio/cameras/camera_utils.py to normalize poses and construct camera objects with consistent coordinate conventions (nerfstudio/cameras/camera_utils.py)
- Create a corresponding DataManager (or reuse an existing one) and register your parser with @dataclass in the nerfstudio/configs/ registry (docs/developer_guides/pipelines/dataparsers.md)
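To make the parser's job concrete, here is a self-contained sketch that pulls image paths and poses out of an Instant-NGP-style transforms.json, the input format nerfstudio's standard dataparser consumes. The function name and the simplifications are my own; the real _generate_dataparser_outputs() also handles intrinsics, distortion, and scene normalization.

```python
import json
import numpy as np

def parse_transforms(json_text: str):
    """Schematic transforms.json parser: returns image paths and
    4x4 camera-to-world matrices. Illustrative only."""
    meta = json.loads(json_text)
    paths = [f["file_path"] for f in meta["frames"]]
    poses = np.array([f["transform_matrix"] for f in meta["frames"]])  # [N, 4, 4]
    assert poses.shape[1:] == (4, 4), "expected 4x4 c2w matrices"
    return paths, poses

# A one-frame example with an identity pose.
sample = json.dumps({"frames": [
    {"file_path": "images/frame_0001.png",
     "transform_matrix": np.eye(4).tolist()},
]})
paths, poses = parse_transforms(sample)
print(len(paths), poses.shape)  # 1 (1, 4, 4)
```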
Extend the Viewer with Custom Controls
- Define a custom parameter in your model's get_param_groups() or viewer state, using dataclass fields that map to UI controls (nerfstudio/models/base_model.py)
- The viewer automatically introspects these fields and renders controls; customize appearance per docs/developer_guides/viewer/custom_gui.md (docs/developer_guides/viewer/custom_gui.md)
- Hook into WebSocket message handlers in nerfstudio/viewer/ to synchronize viewer state with your model in real time (nerfstudio/viewer)
🔧Why these technologies
- PyTorch — Standard deep learning framework with strong CUDA support and gradient computation for training NeRF networks
- Dataclasses (Python) — Config composition and method registration without heavy dependency; enables reproducible experiments and CLI generation
- Three.js + WebSocket (Viewer) — Cross-platform interactive 3D visualization with real-time updates from training loop without polling
- Tyro (CLI) — Auto-generates command-line interfaces from dataclass configs, reducing boilerplate for method registration
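The dataclass-config pattern that tyro automates can be illustrated with stdlib dataclasses alone. The config classes below are hypothetical, not nerfstudio's actual ones; tyro's contribution is turning fields like these into CLI flags automatically.

```python
from dataclasses import dataclass, field, replace

@dataclass
class OptimizerConfig:
    lr: float = 1e-2
    weight_decay: float = 0.0

@dataclass
class MethodConfig:
    """Composable experiment config: nested dataclasses give type
    safety and IDE autocomplete, and a tool like tyro can map a field
    such as optimizer.lr to a flag like --optimizer.lr."""
    method_name: str = "nerfacto"
    num_iterations: int = 30_000
    optimizer: OptimizerConfig = field(default_factory=OptimizerConfig)

# A CLI override like `--optimizer.lr 5e-3` amounts to:
base = MethodConfig()
tuned = replace(base, optimizer=replace(base.optimizer, lr=5e-3))
print(base.optimizer.lr, tuned.optimizer.lr)  # 0.01 0.005
```

Because replace() returns new objects, the base config stays untouched, which is what makes experiments reproducible.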
⚖️Trade-offs already made
- Dataclass-based config over YAML
  - Why: Type safety, IDE autocomplete, and composability; avoids schema drift between config and code
  - Consequence: Steeper learning curve for users unfamiliar with Python dataclasses; less declarative than YAML for non-developers
- Monolithic field/model abstraction over modular component plugins
  - Why: Simpler API surface and easier debugging; encourages implementation variants via inheritance
  - Consequence: Harder to mix-and-match pre-trained components; requires full model reimplementation for new architectures
- WebSocket viewer over headless tensorboard-only logging
  - Why: Enables interactive debugging and real-time hyperparameter tuning during training
  - Consequence: Adds complexity to the training loop; browser dependency for visualization
🚫Non-goals (don't propose these)
- Production inference serving (no REST API or model serving framework included)
- Multi-GPU distributed training (single GPU focus with optional multi-machine via external orchestration)
- Deployment to mobile/embedded devices (desktop/server NeRF rendering only)
- Real-time training (<30ms per step); optimizes for convergence quality over speed
🪤Traps & gotchas
CUDA version compatibility is critical — the code targets specific tiny-cuda-nn versions (see setup.py for exact pins). GPU memory requirements are high (typically 8–24 GB VRAM depending on method and resolution). The viewer requires a live socket.io connection to the training process (ws://localhost:6006 by default); network issues silently degrade interactivity. The config system uses tyro, which evaluates type hints at runtime — be careful with complex nested Pydantic models. Data formatting is strict: dataparsers expect an exact camera matrix format (4x4 c2w matrices), and column-major vs. row-major confusion is a frequent trap.
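The row-major versus column-major trap is easy to demonstrate with NumPy. The numbers below are arbitrary, and the convention shown (translation in the last column, OpenGL-style -z viewing) is the common c2w layout; check each dataparser's own docs before relying on it.

```python
import numpy as np

# A 4x4 camera-to-world (c2w) matrix: rotation in the top-left 3x3,
# camera position in the last COLUMN. Reading the translation from the
# last row instead (a transpose or row/column mix-up) silently zeroes
# every ray origin.
c2w = np.eye(4)
c2w[:3, 3] = [1.0, 2.0, 3.0]      # camera center in world coordinates

camera_center = c2w[:3, 3]        # correct: last column -> [1, 2, 3]
wrong_center  = c2w[3, :3]        # trap: last row is [0, 0, 0]

# World-space ray direction for a camera-space direction looking down -z:
pixel_dir_cam = np.array([0.0, 0.0, -1.0])
ray_dir_world = c2w[:3, :3] @ pixel_dir_cam
print(camera_center, wrong_center, ray_dir_world)
```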
🏗️Architecture
💡Concepts to learn
- Neural Radiance Field (NeRF) — Core concept of the entire framework—understanding how neural networks implicitly represent 3D scenes as continuous functions of position and view direction is essential to understand what nerfstudio enables
- Positional Encoding / Sinusoidal Feature Encoding — Nerfstudio's nerfstudio/fields/base.py extensively uses positional encoding to map low-dimensional coordinates to high-dimensional feature space; fundamental to NeRF expressivity
- Volume Rendering — Ray marching and alpha compositing in nerfstudio/fields/nerf_field.py implements volume rendering equation to convert NeRF outputs (density, color) to pixel values
- Stratified Sampling / Hierarchical Sampling — Nerfstudio implements coarse-fine NeRF architecture requiring intelligent ray sample allocation along depth; critical for rendering quality and speed
- Socket.IO Real-Time Communication — Viewer and training process communicate via socket.io (viewer/src/modules/websocket/); understanding async message passing is needed to modify viewer-trainer interaction
- Multi-resolution Hash Encoding — Instant-NGP integration uses multi-resolution hash grids for efficient feature encoding; nerfstudio's datastructures in fields/ support this modern alternative to sinusoidal encoding
- Redux State Management Pattern — Viewer frontend uses Redux Toolkit for centralized state (viewer/src/modules/store/); understanding action/reducer patterns needed for UI feature development
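Of these concepts, volume rendering is compact enough to show numerically. This NumPy sketch of discrete alpha compositing is the generic NeRF rendering equation, not nerfstudio's actual renderer code.

```python
import numpy as np

def composite(densities, rgbs, deltas):
    """Discrete volume rendering along one ray.

    alpha_i = 1 - exp(-sigma_i * delta_i)         (per-sample opacity)
    T_i     = prod_{j<i} (1 - alpha_j)            (transmittance)
    color   = sum_i T_i * alpha_i * rgb_i
    """
    alphas = 1.0 - np.exp(-densities * deltas)                        # [S]
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))    # T_i
    weights = trans * alphas                                          # [S]
    return (weights[:, None] * rgbs).sum(axis=0), weights.sum()

# One ray, 4 samples: empty space, then an opaque red surface.
densities = np.array([0.0, 0.0, 50.0, 50.0])
rgbs = np.array([[0, 0, 0], [0, 0, 0], [1, 0, 0], [1, 0, 0]], dtype=float)
deltas = np.full(4, 0.25)
color, opacity = composite(densities, rgbs, deltas)
print(color.round(3), round(opacity, 3))
```

The first high-density sample absorbs nearly all the transmittance, so the pixel comes out red with opacity near 1 — the behavior hierarchical sampling exploits by concentrating samples where weights are large.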
🔗Related repos
- google/jaxnerf — Alternative NeRF implementation in JAX instead of PyTorch; reference implementation influencing nerfstudio's modular design
- NVlabs/instant-ngp — NVIDIA's Instant-NGP repository, whose method nerfstudio integrates as one of its trainable methods
- bmild/nerf — Original NeRF paper reference implementation; foundational work that nerfstudio builds upon
- KITTI-Dataset/devkit — Common 3D dataset used for NeRF training in nerfstudio; dataparsers support KITTI camera formats
- colmap/colmap — Structure-from-motion tool that nerfstudio integrates for camera pose estimation; its output format is a primary dataparser input
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add GitHub Action workflow to validate and lint the viewer React application
The repo has a .github/workflows/core_code_checks.yml for Python code and a .pre-commit-config.yaml for general checks, but there's no dedicated CI workflow for the viewer React/JavaScript code. The viewer package.json shows eslint is configured but not automatically enforced in CI. A new workflow would catch linting errors, test failures, and build issues in the viewer before merging, ensuring code quality in the web interface that end-users interact with.
- [ ] Create .github/workflows/viewer_checks.yml that runs on PR/push to main
- [ ] Add job to run npm run lint in the viewer directory to enforce ESLint rules
- [ ] Add job to run npm run build to ensure the React app builds successfully
- [ ] Add job to run npm test with the CI flag to validate unit tests in the viewer
- [ ] Document the new workflow in CONTRIBUTING.md or README if contribution guidelines exist
Add unit tests for viewer Redux state management and socket.io communication
The viewer uses Redux (@reduxjs/toolkit, react-redux) and socket.io-client for real-time communication with the backend, but there's no test directory visible in the provided structure. Given the complexity of state management and socket events in a collaborative NeRF viewer, adding tests for Redux reducers/actions and socket.io handlers would prevent regressions and make the viewer more maintainable. This directly impacts the stability of the web UI.
- [ ] Create viewer/src/__tests__/ directory structure for Redux state tests
- [ ] Add unit tests for Redux slices (actions, reducers) used in the viewer (e.g., camera state, scene controls)
- [ ] Add unit tests for socket.io event handlers to verify correct state updates on incoming messages
- [ ] Use Jest and React Testing Library (already in devDependencies) for consistent testing
- [ ] Ensure new tests are included in .github/workflows/viewer_checks.yml (from PR #1)
Add developer guide documenting the viewer architecture and socket.io protocol
The repo has comprehensive developer guides in docs/developer_guides/ for pipelines, fields, models, and datamanagers, but there's no equivalent documentation for the viewer/UI layer. Given that the viewer is a 'collaboration friendly studio' with WebSocket communication, datepickers, real-time camera controls, and complex Redux state management, new contributors need clear documentation of: (1) how the viewer connects to the backend, (2) the socket.io message protocol, (3) Redux state structure, (4) how to add new UI components. This will reduce contributor friction.
- [ ] Create docs/developer_guides/viewer_architecture.md documenting the React/Redux/Three.js stack
- [ ] Document the socket.io event protocol (e.g., how the viewer sends camera updates and receives scene data)
- [ ] Provide a diagram of Redux state structure and data flow (similar to the pipeline diagrams in docs/developer_guides/pipelines/imgs/)
- [ ] Add a section on how to add new controls/panels in the UI using Leva or dat.gui
- [ ] Include setup instructions for running the viewer locally and connecting to a NeRF training session
- [ ] Link to this guide from the main README and developer guide index
🌿Good first issues
- Add integration tests for all dataparser plugins (colmap, polycam, etc.) in nerfstudio/data/dataparsers/tests/—currently limited test coverage for dataloader correctness across different input formats
- Implement missing documentation examples in docs/developer_guides/pipelines/—add end-to-end runnable tutorial showing how to create custom Field subclass with concrete code and expected output
- Add viewer performance profiling dashboard (React component using Three.js stats.js integration) to measure FPS and identify rendering bottlenecks in real-time during training
⭐Top contributors
Click to expand
Top contributors
- @KevinXu02 — 9 commits
- @jb-ye — 9 commits
- @brentyi — 7 commits
- @nepfaff — 7 commits
- @AntonioMacaronio — 6 commits
📝Recent commits
Click to expand
Recent commits
- 50e0e3c — Add image tiling option to ColmapDataParser to use less memory (#3685) (couperle)
- 04a6cb6 — Add documentation for LiveScene (#3577) (Tavish9)
- 83fe2f6 — fix core tests (#3687) (f-dy)
- 2adcc38 — Fix Viser reference (#3654) (emmanuel-ferdman)
- 5003d0e — sample pixels from all data in variable-resolution batches (#2772) (decrispell)
- 2091a0d — Removing unnecessary print statement (#3639) (AntonioMacaronio)
- 94357f8 — Dataloading Followup (#3604) (AntonioMacaronio)
- 54b127f — Fixing bounding box issue when exporting poisson mesh (#3591) (HeaIn)
- 047100d — Allow up to v5 of protobuf (#3580) (klimermana)
- 352cae6 — Updating github actions cache version (#3624) (AntonioMacaronio)
🔒Security observations
- High · Outdated and Vulnerable Dependencies in Frontend — package.json (viewer dependencies). Multiple npm dependencies contain known vulnerabilities. Notable issues include socket.io-client@4.5.1 (XSS vulnerabilities), three@0.142.0 (outdated), and msgpack-lite@0.1.26 (unmaintained with security issues). The react-scripts dependency version is incomplete/cut off in the provided manifest. Fix: update all dependencies to the latest stable versions; run npm audit and resolve identified vulnerabilities; replace msgpack-lite with msgpackr (already present) or another actively maintained alternative; add automated dependency scanning in CI/CD.
- High · Missing Security Headers and CORS Configuration — Viewer application (package.json, missing security config files). The React-based viewer lacks visible security-header configuration (Content-Security-Policy, X-Frame-Options, etc.), and socket.io-client communication appears unconfigured for secure transport. No CORS restrictions are visible in the package structure. Fix: implement security headers via middleware; configure socket.io with secure/CORS options; add rate limiting for socket.io connections.
- Medium · Incomplete Dockerfile Security Configuration — Dockerfile (builder stage). The Dockerfile uses an nvidia/cuda base image, but the apt-get install section is incomplete/truncated. There is no apt cache cleanup, no visible non-root user creation, and layers are not optimized for a minimal attack surface. Fix: complete the RUN instruction; add apt-get clean && rm -rf /var/lib/apt/lists/* after package installation; create a non-root runtime user; use multi-stage builds to minimize final image size; scan the final image with Trivy or a similar tool.
- Medium · Conditional Source Copying Logic in Dockerfile — Dockerfile (source_copy, source_no_copy, source stages). Conditional ONBUILD directives keyed on the NERFSTUDIO_VERSION argument clone from git when the version is empty and copy from the local context otherwise, creating inconsistent security contexts, and git operations carry no verification (GPG signatures, commit hashes). Fix: pin explicit git commit hashes instead of branch names; verify commits with GPG signatures; replace the conditional logic with explicit, reviewable build paths.
- Medium · Exposed Build Arguments and Environment Variables — Dockerfile (ARG and ENV declarations). Build arguments such as NVIDIA_CUDA_VERSION, CUDA_ARCHITECTURES, and NERFSTUDIO_VERSION are not validated, and DEBIAN_FRONTEND and QT_XCB_GL_INTEGRATION are set globally without scope limitation. Fix: validate and whitelist build arguments at build time; scope ARG/ENV declarations; document all expected argument values.
- Medium · Uncontrolled External Dependencies in Git Clone — Dockerfile (source_no_copy stage). The source_no_copy stage performs a recursive git clone of the nerfstudio repository without hash verification or submodule integrity checks, leaving room for supply-chain attacks via compromised submodules. Fix: pin submodule commits in .gitmodules; validate with git verify-commit; initialize only explicitly trusted submodules; consider a vendor/lock-file approach for dependencies.
- Low · Missing Security Policy and Vulnerability Reporting Mechanism — Repository root (missing SECURITY.md). No SECURITY.md is visible, so there is no documented security reporting policy or responsible-disclosure guideline. Fix: create SECURITY.md with a disclosure policy and security contact; reference vulnerability reporting in the main README; add security scanning to GitHub Actions workflows.
- Low · Pre-commit Configuration Without Security Hooks — .pre-commit-config.yaml. The file exists, but its content was not provided to the analysis; standard security-scanning hooks (bandit for Python, npm audit, etc.) are not visible in it. Fix: add security-scanning hooks to the pre-commit configuration.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.