
NVIDIA/TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Healthy

Healthy across the board

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 4w ago
  • 25+ active contributors
  • Distributed ownership (top contributor 19% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant: "RepoPilot: Healthy"
[![RepoPilot: Healthy](https://repopilot.app/api/badge/nvidia/tensorrt)](https://repopilot.app/r/nvidia/tensorrt)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/nvidia/tensorrt on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: NVIDIA/TensorRT

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/NVIDIA/TensorRT shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 4w ago
  • 25+ active contributors
  • Distributed ownership (top contributor 19% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live NVIDIA/TensorRT repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/NVIDIA/TensorRT.

What it runs against: a local clone of NVIDIA/TensorRT — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in NVIDIA/TensorRT | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 56 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>NVIDIA/TensorRT</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of NVIDIA/TensorRT. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/NVIDIA/TensorRT.git
#   cd TensorRT
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of NVIDIA/TensorRT and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "NVIDIA/TensorRT(\.git)?\b" \
  && ok "origin remote is NVIDIA/TensorRT" \
  || miss "origin remote is not NVIDIA/TensorRT (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(Apache-2\.0)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in CMakeLists.txt CONTRIBUTING.md CODING-GUIDELINES.md VERSION README.md; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 56 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~26d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/NVIDIA/TensorRT"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

NVIDIA TensorRT is an SDK for optimizing and deploying deep learning models on NVIDIA GPUs with high-performance inference. It takes trained neural networks (from frameworks like PyTorch and TensorFlow, via ONNX), applies graph optimization, kernel auto-tuning, and precision quantization (INT8, FP16), then compiles them into efficient GPU engine binaries for production inference with minimal latency and memory footprint. Monorepo structure: the core TensorRT C++/CUDA inference library lives at the root; the ONNX parser (onnx-tensorrt) is integrated as a submodule; open-source plugins sit in their own subdirectories; demo applications live in demo/ (including the BERT example in demo/BERT/); CMake drives multi-platform builds via cmake/ and cmake/toolchains/. Python bindings and utilities live in separate layers. CI workflows in .github/workflows/ run on multiple platform combinations.

👥Who it's for

ML engineers and inference platform developers who need to deploy trained deep learning models to production on NVIDIA GPUs with strict latency/throughput requirements. Also used by framework maintainers (ONNX Runtime, PyTorch) who integrate TensorRT as a backend execution provider, and embedded/edge engineers deploying models to Jetson devices.

🌱Maturity & risk

Production-ready and actively maintained by NVIDIA. The codebase is mature (established project with versioning tracked in VERSION and CHANGELOG.md), has comprehensive CI/CD via GitHub Actions (.github/workflows/), and the README announces TensorRT 11.0 coming in Q2 2026 with planned API migrations. Thousands of commits in C++/CUDA, indicating sustained development.

Breaking changes are intentional and documented: TensorRT 11.0 will remove IPluginV2 (replaced by IPluginV3), weakly-typed networks (replaced by strongly-typed), implicit quantization (replaced by explicit), and Python 3.9 support. Build complexity is high due to the multiple CMake toolchains (cmake/toolchains/) for cross-compilation targets (aarch64, Android, QNX, Windows). Dependencies on NCCL, cuDNN, and CUDA mean build environment setup is non-trivial and version-specific.

Active areas of work

Active migration path to TensorRT 11.0: the README explicitly calls for migration from IPluginV2 to IPluginV3, from implicit to explicit quantization, and from weakly- to strongly-typed networks. The deprecated TREX tool is being replaced by Nsight Deep Learning Designer. Python 3.9 support is ending, and RPM packages now require Python 3.12. The 10.x series continues to receive optimization and bug-fix releases.

🚀Get running

Clone the repo: git clone https://github.com/NVIDIA/TensorRT.git && cd TensorRT. Install dependencies from requirements.txt (torch==1.11.0+cu113, transformers==4.18.0, onnxruntime-gpu, polygraphy). Build with CMake: mkdir build && cd build && cmake .. && make -j$(nproc). Build system is CMake (CMakeLists.txt in root), not autotools or npm.

Daily commands: For C++ development: cd build && cmake .. -DCMAKE_BUILD_TYPE=Release && make -j$(nproc) && ctest. For Python (e.g., BERT demo): cd demo/BERT && python builder.py (requires model weights and calibration data). Tools like polygraphy are invoked via Python CLI after installation. No single 'dev server'; this is a compiled library, not a web service.
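In code, the daily Python workflow boils down to: parse an ONNX model, set builder flags, and serialize an engine. Below is a minimal sketch of that standard flow, assuming the tensorrt wheel is installed; the file name model.onnx and the FP16 choice are illustrative, not taken from this repo's demos, which wrap the same steps in builder.py with model-specific plugins:

```python
import tensorrt as trt

# Standard ONNX -> TensorRT engine flow.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the default in TRT 10
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:  # hypothetical model file
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed: check operator coverage")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # optional precision choice
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

# Kernel auto-tuning happens here; large models can take minutes.
engine = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine)
```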

🗺️Map of the codebase

  • CMakeLists.txt — Root build configuration for the entire TensorRT project; defines compilation flags, dependencies, and platform-specific toolchains that affect all C++ and CUDA components.
  • CONTRIBUTING.md — Defines contribution guidelines, code style standards, and review process that all developers must follow when submitting pull requests.
  • CODING-GUIDELINES.md — Establishes C++ coding conventions, naming standards, and architectural patterns enforced across the codebase.
  • VERSION — Declares the current TensorRT version and is referenced throughout the build system and documentation generation.
  • README.md — Entry point for understanding TensorRT's purpose, roadmap (TensorRT 11.0 breaking changes), and high-level architectural goals.
  • .github/CODEOWNERS — Defines code ownership and review responsibilities for different subsystems, critical for knowing who reviews what.

🛠️How to make changes

Add a new model conversion demo (e.g., new transformer architecture)

  1. Create new demo directory under demo/YourModel/ (demo/BERT/)
  2. Write a PyTorch-to-ONNX conversion script (model_to_onnx.py), following the pattern from DeBERTa (demo/DeBERTa/deberta_pytorch2onnx.py); a minimal export sketch follows this list
  3. Create builder.py to load ONNX and construct TensorRT engine with optimization passes (demo/BERT/builder.py)
  4. Implement inference.py with dataloader and performance benchmarking (demo/BERT/inference.py)
  5. Add calibration helper (helpers/calibrator.py pattern) if INT8 quantization is needed (demo/BERT/helpers/calibrator.py)
  6. Create README.md documenting model setup, build commands, and expected performance (demo/BERT/README.md)
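For step 2, the export itself reduces to a single torch.onnx.export call. A minimal sketch, where the Linear model, file name, and opset are placeholders; a real demo exports the full architecture with named dynamic axes:

```python
import torch

# Hypothetical stand-in; any torch.nn.Module exports the same way.
model = torch.nn.Linear(768, 768).eval()
dummy = torch.randn(1, 768)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}},  # let batch size vary at build time
    opset_version=17,
)
```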

Add support for a new GPU platform (cross-compilation target)

  1. Create toolchain file in cmake/toolchains/ named cmake_<arch>.toolchain (e.g., cmake_riscv64.toolchain) (cmake/toolchains/cmake_aarch64.toolchain)
  2. Define compiler, CUDA architecture flags, and C++ standard in the new toolchain (cmake/toolchains/cmake_x86_64.toolchain)
  3. Update cmake/modules/Platforms.cmake to detect and configure flags for the new platform (cmake/modules/Platforms.cmake)
  4. Add platform-specific CI workflow in .github/workflows/ to validate compilation (.github/workflows/blossom-ci.yml)
  5. Document the new platform in README.md and CONTRIBUTING.md (README.md)

Prepare codebase for TensorRT 11.0 (migrate to Strongly-Typed Networks)

  1. Review CHANGELOG.md for breaking changes in TensorRT 11.0 (removal of weakly-typed APIs, implicit quantization) (CHANGELOG.md)
  2. Audit demo code: replace NetworkDefinition weakly-typed layer creation with strongly-typed equivalents (demo/BERT/builder.py); see the network-creation sketch after this list
  3. Migrate INT8 calibration from implicit quantization (builder->setFlag()) to explicit quantization (QDQ nodes) (demo/BERT/helpers/calibrator.py)
  4. Update plugin implementations to use IPluginV3 (new plugin interface replacing IPluginV2) (demo/BERT/infer_c/infer_c.cpp)
  5. Document migration patterns in new MIGRATION-TO-11.0.md file and reference in CONTRIBUTING.md (CONTRIBUTING.md)
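The core of step 2 is how the network is created. A minimal sketch of the strongly-typed creation flag (available in TensorRT 9/10 Python bindings; assumes the tensorrt wheel is installed):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Legacy weakly-typed network (removed in TensorRT 11.0): the builder picks
# tensor dtypes, optionally nudged by flags like BuilderFlag.FP16.
# network = builder.create_network(0)

# Strongly-typed network: dtypes are part of the network definition and the
# builder must honor them, so precision decisions live in the graph itself
# (e.g. as explicit Q/DQ nodes rather than implicit calibration).
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.STRONGLY_TYPED)
)
```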

Add a new inference optimization technique (e.g., sparsity, quantization-aware training)

  1. Create a new builder variant (builder_sparse.py) inheriting from the base builder pattern (demo/BERT/builder.py); a sparsity-flag sketch follows this list
  2. Implement optimization in helpers/ module (helpers/sparse_processor.py) with calibration integration (demo/BERT/helpers/calibrator.py)
  3. Add demo inference script showing before/after performance metrics (demo/BERT/inference.py)
  4. Benchmark with perf.py script and document results in README or Jupyter notebook (demo/BERT/perf.py)
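For steps 1 and 2, the structured-sparsity path is a builder flag rather than a new API surface. A hedged sketch of what the hypothetical builder_sparse.py core would add (pays off only with 2:4-sparse weights on an Ampere-or-newer GPU):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # populate from ONNX as in builder.py
config = builder.create_builder_config()

# Let TensorRT select 2:4 structured-sparsity kernels for eligible weights.
config.set_flag(trt.BuilderFlag.SPARSE_WEIGHTS)
```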

🔧Why these technologies

  • CUDA & cuDNN — Direct GPU kernel access for maximum inference throughput; NVIDIA's native optimization libraries for tensor operations.
  • CMake (multi-platform build) — Cross-compilation support for x86_64, ARM (aarch64), QNX, Android, Windows; one build system for all platforms.
  • ONNX as interchange format — Framework-agnostic model representation; enables PyTorch/TensorFlow → TensorRT conversion without vendor lock-in on the model side.

🪤Traps & gotchas

  • CUDA, cuDNN, and NCCL versions must match your GPU driver and TensorRT version (no auto-negotiation).
  • CMake toolchain selection is critical: using cmake_x86_64.toolchain on an aarch64 host silently cross-compiles for the wrong target.
  • Python bindings require an exact Python version (3.10+ after the 3.9 deprecation in TensorRT 10.16); mismatches cause import failures.
  • The BERT demo expects model weights to be pre-downloaded; builder.py fails cryptically without them.
  • The polygraphy tool is a separate install dependency (listed in requirements.txt) but is not automatically added to PATH.
  • IPluginV2 plugins will break in TensorRT 11.0; migration to IPluginV3 is required and non-trivial.
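Since several of these traps are version mismatches, a quick environment check before building is cheap. A minimal sketch, assuming the tensorrt and torch wheels are installed (nothing here is specific to this repo):

```python
import tensorrt as trt
import torch

# Print the versions that most often disagree; cross-check them against the
# support matrix for your TensorRT release before building engines.
print("TensorRT:", trt.__version__)
print("PyTorch: ", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```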

💡Concepts to learn

  • Graph Optimization and Kernel Fusion — TensorRT's core advantage: it fuses adjacent GPU kernels (e.g., Conv+ReLU+Concat) into single custom kernels to minimize memory traffic and latency, which naive layer-by-layer inference cannot achieve
  • Quantization (INT8, FP16, Mixed-Precision) — TensorRT supports explicit and implicit quantization (though implicit is deprecated in 11.0); understanding precision tradeoffs (accuracy vs speed/memory) is critical for production models on embedded devices
  • Strongly-Typed Networks vs Weakly-Typed — Breaking change in TensorRT 11.0: strongly-typed networks enforce data types at build time (safer, faster), while legacy weakly-typed deferred type resolution; new code must use strongly-typed API
  • IPluginV2 vs IPluginV3 Plugin Architecture — TensorRT 11.0 removes IPluginV2; all custom layer plugins must migrate to IPluginV3 which has stricter typing, better batching support, and explicit memory management
  • Calibration and Quantization-Aware Training (QAT) — demo/BERT/helpers/calibrator.py shows how TensorRT quantizes models: calibration runs representative data to compute optimal INT8 scaling factors; understanding this pipeline is essential for production INT8 models (a calibrator skeleton is sketched after this list)
  • Auto-Tuning and Kernel Selection — TensorRT auto-selects fastest kernel implementations for your specific GPU at build time (not runtime); build time can be minutes for large models because of this exhaustive search
  • ONNX Parser and Operator Coverage — TensorRT's ONNX parser (onnx-tensorrt plugin) does not support all ONNX operators; unsupported ops fail at build time; understanding operator mapping is critical for model compatibility
  • onnx/onnx — ONNX is the model exchange format TensorRT parses; understanding ONNX IR is essential for using TensorRT's parser plugin
  • pytorch/pytorch — PyTorch models are converted to ONNX then fed to TensorRT; PyTorch's export_onnx() is the upstream step in the inference pipeline
  • onnx/onnx-tensorrt — This is the ONNX-to-TensorRT converter plugin (part of TensorRT-OSS); implements the ONNX parser that TensorRT uses
  • NVIDIA/TensorRT-LLM — Companion repo optimizing large language models specifically with TensorRT; shows advanced quantization and batching patterns for production LLM inference
  • NVIDIA/Triton-Inference-Server — Production deployment framework that uses TensorRT as one backend engine for multi-model serving; typical customer wraps TensorRT engines in Triton
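The calibrator pattern referenced above (demo/BERT/helpers/calibrator.py) reduces to subclassing one of TensorRT's calibrator interfaces. A skeleton with batch loading elided; note this implicit-quantization path is the one being deprecated in favor of explicit Q/DQ in 11.0:

```python
import tensorrt as trt

class MinimalCalibrator(trt.IInt8EntropyCalibrator2):
    """Skeleton of the calibrator pattern in demo/BERT/helpers/calibrator.py.

    The device-pointer plumbing is elided; a real implementation copies each
    calibration batch to GPU memory (e.g. with cuda-python or pycuda) and
    returns the device pointers from get_batch.
    """

    def __init__(self, cache_file="calibration.cache"):
        super().__init__()
        self.cache_file = cache_file

    def get_batch_size(self):
        return 1

    def get_batch(self, names):
        # Return a list of device pointers for the next batch,
        # or None when calibration data is exhausted.
        return None

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```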

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for cmake/modules directory

The cmake/modules directory contains critical build infrastructure (FindNCCL.cmake, ImportDL.cmake, ShouldCompileKernel.cmake, etc.) that lacks visible test coverage. These modules are essential for cross-platform builds (x86_64, aarch64, Android, QNX, Windows) but have no corresponding test files. Adding CMake unit tests would catch regressions in build logic across different toolchains and platforms, improving reliability for downstream users.

  • [ ] Create tests/cmake/ directory structure mirroring cmake/modules/
  • [ ] Add unit tests for FindNCCL.cmake covering different NCCL installation scenarios
  • [ ] Add unit tests for Platforms.cmake verifying toolchain detection for all supported architectures (aarch64, x86_64, ppc64le, qnx, android, windows)
  • [ ] Add unit tests for ShouldCompileKernel.cmake testing CUDA compute capability matching logic
  • [ ] Integrate tests into CI pipeline via .github/workflows/ (extend existing docker-image.yml or blossom-ci.yml)
  • [ ] Document test execution in CONTRIBUTING.md

Add missing CI workflow for demo applications (BERT demo validation)

The demo/BERT directory contains multiple inference implementations (Python, C++, variable sequence length variants) that should be validated in CI. Currently, only docker-image.yml and blossom-ci.yml exist, but there's no specific workflow validating that demo code actually builds and runs. This prevents regressions in end-to-end inference examples that users depend on for getting started with TensorRT.

  • [ ] Create .github/workflows/demo-build-test.yml workflow
  • [ ] Add build steps for demo/BERT/infer_c/infer_c.cpp using CMakeLists.txt
  • [ ] Add validation that demo/BERT Python scripts can import required dependencies (torch, transformers, onnxruntime-gpu from requirements.txt)
  • [ ] Include basic model inference smoke test for BERT demo using cached small model
  • [ ] Ensure workflow runs on both x86_64 Linux and validates GPU accessibility
  • [ ] Document demo testing requirements in demo/BERT/README.md

Create integration tests for TensorRT 11.0 migration path (Strongly Typed Networks)

The README announces that TensorRT 11.0 will remove weakly-typed networks and implicit quantization APIs. Currently, there's no validation suite helping contributors and users identify deprecated API usage patterns. Creating a migration test suite would document the new Strongly Typed Networks API, provide code examples, and catch accidental usage of deprecated patterns in new contributions.

  • [ ] Create tests/migration/ directory for TensorRT version compatibility tests
  • [ ] Add tests/migration/test_strongly_typed_networks.cpp demonstrating new API patterns vs. old weakly-typed patterns
  • [ ] Add tests/migration/test_implicit_quantization_deprecation.cpp verifying detection of deprecated quantization APIs
  • [ ] Create migration guide document at documents/migration_to_tensorrt_11.md with code examples
  • [ ] Add pre-commit hook or linting rule to flag usage of deprecated API calls (e.g., scanning for weakly-typed setType() calls)
  • [ ] Reference migration path in CONTRIBUTING.md for new contributors

🌿Good first issues

  • Add unit tests for cmake/modules/FindNCCL.cmake and cmake/modules/ImportDL.cmake — currently no test harness validates correct NCCL version detection on different platforms, leading to silent linking errors
  • Document the IPluginV3 migration path with a worked example: create a simple demo/plugins/simple_plugin_v3/ that mirrors demo/BERT/ structure but uses new IPluginV3 API with CMake integration guide
  • Add a clang-format CI check to .github/workflows/ (a .clang-format file exists, but there is no automated enforcement); create a GitHub Action that runs clang-format and fails the PR when formatting is violated, reducing review friction


📝Recent commits

  • 5302b28 — 10.16.1 update (#4737) (kevinch-nv)
  • 52399f5 — TensorRT 10.16 OSS Release (#4729) (kevinch-nv)
  • aa76a58 — Add an additional include path for safety headers (#4716) (#4717) (kevinch-nv)
  • 0da1458 — bugfix: include pyproject.toml in demoDiffusion (#4707) (asfiyab-nvidia)
  • bdafad3 — Add UNIFIED_BUILDER code for safety samples (#4704) (#4706) (kevinch-nv)
  • 73b9168 — Fix sampleCudla, update cmake toolchains, improve docs (#4696) (#4699) (poweiw)
  • 40c8707 — Add 2026 Roadmap and TRT 11 annoucements (#4695) (kevinch-nv)
  • d6d490c — Update samplecuDLA header (#4693) (kevinch-nv)
  • 9973b2f — TensorRT 10.15 OSS Release (#4692) (kevinch-nv)
  • 0c80196 — Merge pull request #4688 from yuanyao-nv/dev-revert-gs-change (yuanyao-nv)

🔒Security observations

The NVIDIA TensorRT repository shows moderate security concerns, primarily centered on outdated dependencies with known vulnerabilities. PyTorch 1.11.0 and Transformers 4.18.0 are significantly behind current versions and likely contain unpatched CVEs. The use of custom package indices introduces supply chain risks, and version constraints are inconsistently applied. While the project demonstrates good governance practices (Apache 2.0 license, CODEOWNERS file), security automation in CI/CD and formal security policies are not clearly visible. Immediate remediation should focus on updating to current, maintained versions of core dependencies and implementing automated security scanning in the build pipeline.

  • High · Outdated PyTorch with Known Vulnerabilities — dependencies/requirements.txt (or similar dependency file). The dependencies file specifies torch==1.11.0+cu113, which is significantly outdated (released in 2022). This version may contain known security vulnerabilities that have been patched in newer releases. PyTorch 1.11.0 is no longer actively maintained. Fix: Update PyTorch to the latest stable version (2.x series). Run pip-audit or check PyTorch security advisories for specific CVEs affecting 1.11.0 before upgrading.
  • High · Outdated Transformers Library — dependencies/requirements.txt (or similar dependency file). The transformers==4.18.0 dependency is pinned to an older version from 2022. Newer versions contain security patches and bug fixes. This version is no longer actively maintained by Hugging Face. Fix: Update to transformers>=4.35.0 or the latest stable version. Review changelog for security fixes before upgrading.
  • Medium · Unspecified Version Constraint on onnxruntime-gpu — dependencies/requirements.txt (or similar dependency file). The dependency specification 'onnxruntime-gpu>=1.12' uses only a minimum version constraint without an upper bound. This allows for installation of any version >=1.12, which could introduce breaking changes or vulnerabilities in newer major versions. Fix: Use semantic versioning with upper bound constraints: 'onnxruntime-gpu>=1.12,<2.0' to ensure compatibility while allowing patch updates.
  • Medium · Custom Package Index Without Security Verification — dependencies/requirements.txt (or similar dependency file). The configuration includes '--extra-index-url https://pypi.ngc.nvidia.com' which adds an additional PyPI index. While NVIDIA's official repository is generally trustworthy, this increases the attack surface. Package dependency confusion attacks could exploit this if an attacker publishes a malicious package with the same name on PyPI. Fix: Verify the package sources explicitly. Use '--index-url' instead of '--extra-index-url' if the NVIDIA index should be primary. Consider using a private package repository with authentication for supply chain security.
  • Medium · Missing Package Pinning for Argparse — dependencies/requirements.txt (or similar dependency file). The 'argparse' dependency is listed without any version constraint. Argparse is part of Python standard library in Python 3.2+, and explicit dependency declaration may indicate backport compatibility issues or unnecessarily expose the build to version changes. Fix: If Python 3.2+ is guaranteed, remove 'argparse' dependency. Otherwise, pin to a specific version: 'argparse==1.4.0'.
  • Low · Missing Security Scanning in CI/CD — .github/workflows/. While GitHub Actions workflows are present (.github/workflows/), there is no visible evidence of security scanning (SAST, dependency checking, or secret scanning) in the CI/CD pipeline from the provided file structure. Fix: Implement GitHub's built-in security features: enable Dependabot alerts, add SAST scanning (CodeQL), and implement secret scanning. Add pre-commit hooks to detect credentials.
  • Low · No Security Policy Visible — Repository root. No SECURITY.md file is evident in the repository structure to define responsible disclosure procedures or security contact information. Fix: Create a SECURITY.md file documenting responsible disclosure procedures, security contact information, and supported versions for security updates.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
