RepoPilot

qubvel-org/segmentation_models.pytorch

Semantic segmentation models with 500+ pretrained convolutional and transformer-based backbones.

Healthy

Healthy across the board

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 3w ago
  • 6 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • Concentrated ownership — top contributor handles 52% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/qubvel-org/segmentation_models.pytorch)](https://repopilot.app/r/qubvel-org/segmentation_models.pytorch)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/qubvel-org/segmentation_models.pytorch on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: qubvel-org/segmentation_models.pytorch

Generated by RepoPilot · 2026-05-07 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/qubvel-org/segmentation_models.pytorch shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 3w ago
  • 6 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Concentrated ownership — top contributor handles 52% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live qubvel-org/segmentation_models.pytorch repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/qubvel-org/segmentation_models.pytorch.

What it runs against: a local clone of qubvel-org/segmentation_models.pytorch — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in qubvel-org/segmentation_models.pytorch | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 50 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>qubvel-org/segmentation_models.pytorch</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of qubvel-org/segmentation_models.pytorch. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/qubvel-org/segmentation_models.pytorch.git
#   cd segmentation_models.pytorch
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of qubvel-org/segmentation_models.pytorch and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "qubvel-org/segmentation_models\.pytorch(\.git)?\b" \
  && ok "origin remote is qubvel-org/segmentation_models.pytorch" \
  || miss "origin remote is not qubvel-org/segmentation_models.pytorch (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
# (LICENSE files usually read "MIT License" or "The MIT License (MIT)")
grep -qiE "MIT License" LICENSE 2>/dev/null \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in \
  "segmentation_models_pytorch/__init__.py" \
  "segmentation_models_pytorch/base/model.py" \
  "segmentation_models_pytorch/decoders/__init__.py" \
  "segmentation_models_pytorch/base/hub_mixin.py" \
  "pyproject.toml"
do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 50 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~20d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/qubvel-org/segmentation_models.pytorch"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

segmentation_models.pytorch (SMP) is a PyTorch library providing 12+ pre-built encoder-decoder architectures (U-Net, U-Net++, SegFormer, DPT, etc.) with 800+ pretrained backbones for semantic image segmentation. It solves the problem of quickly deploying state-of-the-art segmentation models by abstracting away architecture complexity: users can instantiate a full model in two lines of code without building encoders or decoders from scratch. Monolithic package structure: the root contains segmentation_models_pytorch/ (core library with base/, decoders/, encoders/, losses/, metrics/), docs/ for Sphinx documentation, examples/ with Jupyter notebooks and Python scripts, requirements/ for dependency groups, and misc/ with code-generation utilities (generate_table.py for encoder metadata). No multi-package monorepo; a single coherent library.

👥Who it's for

Computer vision engineers and researchers building production segmentation pipelines who need flexible, pretrained models without implementing DeepLab/SegFormer/DPT architectures manually. Contributors are typically folks extending the library with new backbones (especially from timm) or loss/metric functions.

🌱Maturity & risk

Production-ready and actively maintained. The repo has CI/CD via GitHub Actions (tests.yml, pypi.yml), codecov integration, ReadTheDocs docs, 500K+ Python LOC, and semantic versioning via PyPI releases. Last activity visible in workflow configs suggests ongoing releases; the maturity is reinforced by real-world adoption (withoutBG sponsor example in README).

Low-to-moderate risk. Single maintainer (qubvel) could be a concern for long-term support, but the codebase is stable with good test coverage. Dependency risk exists via timm integration (800+ timm encoders = external version churn), and PyTorch 1.9+ requirement may lag behind latest PyTorch APIs. No major red flags from file structure, but breaking changes in timm could ripple through.

Active areas of work

Library is in maintenance/feature-addition mode. Workflows (tests.yml, pypi.yml) are active; codecov.yml and dependabot.yml indicate continuous integration and dependency updates are monitored. The presence of encoders_timm.rst and encoders_dpt.rst in docs suggests recent or ongoing work integrating transformer backbones; generate_table_timm.py hints at automation of timm encoder documentation.

🚀Get running

Clone and install: git clone https://github.com/qubvel-org/segmentation_models.pytorch.git && cd segmentation_models.pytorch && pip install -e . (or pip install segmentation-models-pytorch from PyPI). Install dev dependencies: pip install -r requirements/test.txt for testing, pip install -r requirements/docs.txt for docs. Run tests: make test (Makefile present).

Daily commands: there is no server to start; this is a library, not an app. To verify the installation, run: python -c "import segmentation_models_pytorch as smp; model = smp.Unet('resnet50', in_channels=3, classes=1)". For examples: jupyter notebook examples/binary_segmentation_intro.ipynb or python examples/binary_segmentation_buildings.py. Run the full test suite: make test (see Makefile).

🗺️Map of the codebase

  • segmentation_models_pytorch/__init__.py — Main package entry point that exports all public APIs; every user's first import target.
  • segmentation_models_pytorch/base/model.py — Core base model class that all segmentation models inherit from; defines forward pass and initialization patterns.
  • segmentation_models_pytorch/decoders/__init__.py — Exports all 8 decoder architectures (U-Net, DeepLabV3, FPN, LinkNet, etc.); central hub for decoder composition.
  • segmentation_models_pytorch/base/hub_mixin.py — HuggingFace Hub integration for model saving/loading; enables 500+ pretrained model distribution.
  • pyproject.toml — Project metadata, dependencies (PyTorch, timm, transformers), and version; controls build & release.
  • segmentation_models_pytorch/base/heads.py — Classification and segmentation head implementations; final output layer abstraction across all models.
  • segmentation_models_pytorch/base/modules.py — Reusable building blocks (Conv+BN+Act, ASPP, SPP); shared across all decoders and backbones.

🛠️How to make changes

Add a new Semantic Segmentation Decoder

  1. Create decoder directory structure (segmentation_models_pytorch/decoders/mynewdecoder/__init__.py)
  2. Implement decoder class inheriting SegmentationModel pattern with encoder-decoder architecture (segmentation_models_pytorch/decoders/mynewdecoder/model.py)
  3. Implement decoder building blocks (upsampling, fusion ops) following modules.py conventions (segmentation_models_pytorch/decoders/mynewdecoder/decoder.py)
  4. Export the new decoder in the parent __init__.py for public API access (segmentation_models_pytorch/decoders/__init__.py)
  5. Add unit tests following test patterns; verify compatibility with ResNet, EfficientNet backbones (tests/test_mynewdecoder.py)
  6. Document supported decoders and update encoder compatibility matrix (docs/models.rst)
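The decoder shape of steps 2-3 can be sketched in pure PyTorch. Everything here is hypothetical (the names `MyNewDecoder` and `ConvBnRelu` are illustrative); a real contribution should inherit SMP's base classes and follow the conventions in segmentation_models_pytorch/base/modules.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBnRelu(nn.Sequential):
    """Conv + BN + Act block, mirroring the modules.py convention."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

class MyNewDecoder(nn.Module):
    """Hypothetical decoder: fuses an encoder feature pyramid coarse-to-fine."""
    def __init__(self, encoder_channels, out_channels: int = 16):
        super().__init__()
        # encoder_channels is ordered shallow -> deep, e.g. [64, 128, 256, 512]
        deepest, *skip_channels = list(reversed(encoder_channels))
        blocks, in_ch = [], deepest
        for skip_ch in skip_channels:
            blocks.append(ConvBnRelu(in_ch + skip_ch, out_channels))
            in_ch = out_channels
        self.blocks = nn.ModuleList(blocks)

    def forward(self, features):
        # features: shallow -> deep; upsample the deep map and fuse each skip.
        x, *skips = list(reversed(features))
        for block, skip in zip(self.blocks, skips):
            x = F.interpolate(x, size=skip.shape[-2:], mode="nearest")
            x = block(torch.cat([x, skip], dim=1))
        return x
```

Feeding a fake 4-level pyramid (channels 64/128/256/512 at strides 4/8/16/32) yields an output at the shallowest feature resolution, which the segmentation head would then project to class logits.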

Add Support for a New Backbone Encoder

  1. Verify backbone is available in timm or transformers library (no custom training needed) (pyproject.toml)
  2. Use the create_model() factory with the encoder_name parameter; timm/transformers backends auto-load (segmentation_models_pytorch/__init__.py)
  3. Document new encoder in appropriate list (timm, DPT, CNN) with depth/output channels metadata (docs/encoders_timm.rst)
  4. Add integration test to verify encoder works with ≥2 decoders (U-Net, DeepLabV3) (tests/test_encoders.py)

Create and Share a New Pretrained Model via HuggingFace Hub

  1. Train a model using any decoder + encoder combination with your dataset (examples/binary_segmentation_intro.ipynb)
  2. Use push_to_hub() method from hub_mixin to upload model and config to HF Hub (segmentation_models_pytorch/base/hub_mixin.py)
  3. Verify model loads via from_pretrained() from any Hub URL (docs/save_load.rst)
  4. Add model entry to Hall of Fame for community visibility (HALLOFFAME.md)

Add a Custom Loss Function or Metric

  1. Implement the loss function or metric compatible with standard PyTorch interfaces (segmentation_models_pytorch/losses/)
  2. Document loss in Sphinx docs with mathematical definition and usage examples (docs/losses.rst)
  3. Use in training loop alongside model forward pass (examples/binary_segmentation_intro.ipynb)
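An illustrative soft Dice loss following the nn.Module interface from step 1; the name `SoftDiceLoss` and its layout are hypothetical sketches, not the implementations that ship in segmentation_models_pytorch/losses/:

```python
import torch
import torch.nn as nn

class SoftDiceLoss(nn.Module):
    """1 - soft Dice coefficient for binary segmentation logits."""
    def __init__(self, smooth: float = 1.0):
        super().__init__()
        self.smooth = smooth  # avoids division by zero on empty masks

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: (B, 1, H, W) raw scores; targets: (B, 1, H, W) in {0, 1}
        probs = torch.sigmoid(logits)
        dims = (1, 2, 3)
        intersection = (probs * targets).sum(dims)
        cardinality = probs.sum(dims) + targets.sum(dims)
        dice = (2.0 * intersection + self.smooth) / (cardinality + self.smooth)
        return 1.0 - dice.mean()

loss_fn = SoftDiceLoss()
```

Confident predictions matching the target drive the loss toward 0; confident wrong predictions drive it toward 1, which is what makes Dice robust to class imbalance.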

🔧Why these technologies

  • PyTorch (1.9+) — Primary deep learning framework for model definition, training, and inference; required for nn.Module inheritance
  • timm (PyTorch Image Models) — Provides 300+ pretrained CNN backbones (ResNet, EfficientNet, ViT) without custom implementation
  • transformers (Hugging Face) — Access to 200+ Vision Transformer checkpoints (DINO, MAE, DPT) for transformer-based segmentation
  • HuggingFace Hub (huggingface_hub library) — Enables sharing, version control, and distributed hosting of pretrained segmentation models
  • Sphinx + Read the Docs — Automated documentation generation for 500+ encoder compatibility matrix and API docs

⚖️Trade-offs already made

  • Decoder-agnostic backbone loading via factory pattern (create function)

    • Why: Allows 500+ encoders × 8 decoders without explicit coupling; reduces maintenance burden
    • Consequence: Encoder must expose consistent feature pyramid (4 levels); some exotic backbones may need adapter code
  • Use timm + transformers as external dependencies rather than bundling weights

    • Why: Keeps package size small (~5-10 MB); leverages large communities' maintenance
    • Consequence: First model instantiation downloads 50-500 MB weights from timm/HF Hub; requires internet access

🪤Traps & gotchas

  1. Encoder availability varies by source: some encoders require timm (pip install timm), others come from torchvision (included). Check the encoder source in docs/encoders*.rst before assuming availability.
  2. Input shape expectations: models expect (B, C, H, W) tensors; image preprocessing (normalization, resizing) is the user's responsibility and is not baked in.
  3. Pretrained weights mismatch: using in_channels != 3 discards pretrained weights (e.g. RGB → grayscale).
  4. Version pinning: timm is actively developed; breaking encoder API changes in timm can break downstream code.
  5. ONNX export quirks: some complex decoders or custom ops may not export cleanly to ONNX; test before relying on torch.onnx.export().
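The input-shape trap in practice: a minimal preprocessing sketch turning an HWC uint8 image into the float (B, C, H, W) batch the models expect. The ImageNet mean/std values shown are the common convention; use whatever statistics your encoder's pretrained weights assume:

```python
import torch

# Per-channel ImageNet statistics, broadcastable against (B, 3, H, W).
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

# Stand-in for an image loaded via PIL/numpy: (H, W, C) uint8 in [0, 255].
hwc_uint8 = torch.randint(0, 256, (256, 256, 3), dtype=torch.uint8)

x = hwc_uint8.permute(2, 0, 1).unsqueeze(0).float() / 255.0  # -> (1, 3, 256, 256)
x = (x - mean) / std                                         # normalize per channel
```

Skipping the permute or the normalization is a silent failure mode: the model still runs, it just produces garbage.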

🏗️Architecture

💡Concepts to learn

  • Encoder-Decoder Architecture for Semantic Segmentation — SMP's core design pattern: encoder (pretrained CNN/Transformer backbone) extracts spatial features, decoder (upsampling + skip connections) reconstructs pixel-level predictions. Understanding this paradigm is essential to choosing models and modifying architectures.
  • Skip Connections in U-Net — SMP models (especially U-Net, U-Net++) rely on skip connections from encoder to decoder to preserve spatial details lost during downsampling; critical for understanding model behavior and memory usage.
  • Atrous (Dilated) Convolutions — Many SMP encoders (especially DeepLab-based backbones) use atrous convolutions to increase receptive field without spatial resolution loss; affects stride, output feature maps, and interpretability.
  • Dice Loss and Tversky Loss — SMP provides Dice and Tversky losses (alternatives to cross-entropy) for handling class imbalance in segmentation; understanding when/why to use them vs. focal loss is important for training performance.
  • Vision Transformers (ViT) and DPT — SMP supports transformer-based encoders (DPT, SegFormer, Vision Transformer) alongside CNNs; these use self-attention instead of convolutions and have different positional encoding schemes—affects training dynamics and memory.
  • Multi-Scale Feature Fusion (FPN Style) — SMP's FPN and SegFormer models fuse multi-scale decoder features to capture objects at different scales; understanding pyramid structure and fusion strategies is key to model selection.
  • ONNX Export and Model Deployment — SMP is ONNX-export friendly; understanding torch.onnx.export() quirks (op limitations, dynamic shapes) and verification is critical for production inference on non-PyTorch runtimes.
  • torchvision/vision — Official PyTorch vision library providing backbone encoders (ResNet, VGG, etc.) that SMP integrates; foundational dependency.
  • huggingface/pytorch-image-models — Timm library providing 500+ modern transformer/CNN backbones (Vision Transformer, Swin, EfficientNet, etc.); SMP wraps timm encoders via timm integration layer.
  • qubvel/segmentation_models — Original Keras/TensorFlow version of segmentation_models; PyTorch version is a feature-parity port with similar API for TF users migrating to PyTorch.
  • rwightman/pytorch-image-models — the original home of timm, which now lives at huggingface/pytorch-image-models (the old name redirects there); timm is the canonical repo for pretrained image models on PyTorch.
  • facebookresearch/segment-anything — Meta's foundation model for segmentation (SAM); complementary approach to SMP's encoder-decoder paradigm, though not an alternative—both can coexist in pipelines.

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for model serialization across different backends (ONNX, TorchScript, HuggingFace Hub)

The repo has examples for converting to ONNX (examples/convert_to_onnx.ipynb) and saving/loading with HF Hub (examples/save_load_model_and_share_with_hf_hub.ipynb), but there's no dedicated test suite in .github/workflows/tests.yml or unit tests verifying these conversions work correctly. This is critical since users rely on these features for production deployment. Currently segmentation_models_pytorch/base/hub_mixin.py exists but lacks corresponding tests.

  • [ ] Create segmentation_models_pytorch/tests/test_model_export.py with tests for ONNX export validation
  • [ ] Add tests for TorchScript serialization and inference consistency
  • [ ] Add tests for HuggingFace Hub mixin (push_to_hub, from_pretrained) in segmentation_models_pytorch/tests/test_hub_integration.py
  • [ ] Update .github/workflows/tests.yml to run export tests (may require optional ONNX dependencies)
  • [ ] Document in docs/save_load.rst any new export test coverage

Add decoder-specific unit tests for each architecture variant

The repo has segmentation_models_pytorch/decoders directory but the file structure shown is incomplete (truncated). There's no evidence of comprehensive tests for each decoder type (Unet, FPN, PSPNet, DeepLabV3, etc.). Given 500+ pretrained backbones can be combined with multiple decoders, missing decoder tests is a significant gap for ensuring correctness across combinations.

  • [ ] Create segmentation_models_pytorch/tests/test_decoders/ directory
  • [ ] Add unit tests for each decoder class in separate files (test_unet_decoder.py, test_fpn_decoder.py, test_pspnet_decoder.py, etc.)
  • [ ] Tests should verify: output shape correctness, skip connection handling, intermediate feature map dimensions
  • [ ] Add parametrized tests that combine different encoders with each decoder to catch incompatibilities
  • [ ] Update requirements/test.txt if new testing dependencies are needed

Add dataset loading and preprocessing validation tests

The repo has segmentation_models_pytorch/datasets/oxford_pet.py but no visible test file validating dataset loaders. Examples exist (examples/binary_segmentation_intro.ipynb, examples/camvid_segmentation_multiclass.ipynb) but lack automated tests. This is important because dataset pipeline bugs directly impact model training results and are hard to debug.

  • [ ] Create segmentation_models_pytorch/tests/test_datasets.py
  • [ ] Add tests verifying: OxfordPet dataset downloads correctly, data splits are correct, image/mask pairs are aligned
  • [ ] Add tests for common preprocessing operations (resizing, normalization, augmentation compatibility)
  • [ ] Test that dataset outputs match expected shapes and value ranges
  • [ ] Add test cases for edge cases (corrupted downloads, missing files, empty datasets)
  • [ ] Document dataset expectations in docs/datasets.rst (currently missing from docs structure)

🌿Good first issues

  • Add unit tests for all loss functions in segmentation_models_pytorch/losses/. Currently no dedicated test file is visible; write parametrized tests in tests/test_losses.py covering Dice, Jaccard, Tversky, and Focal with edge cases (all-zeros, all-ones predictions). Why it matters: improves robustness and prevents regressions when loss APIs change.
  • Document encoder input/output shapes explicitly. Create a table in docs/ showing (input_channels, input_size_min, output_stride, feature_channels) for each of the 500+ encoders; use the generate_table*.py scripts as a reference to automate population. Why it matters: reduces confusion about architecture compatibility and helps debug shape mismatches.
  • Add an example Jupyter notebook for multi-task segmentation (semantic + boundary detection) using auxiliary outputs. The docs mention auxiliary heads but there is no real example; create examples/multitask_segmentation.ipynb showing how to train and infer with them. Why it matters: addresses a real use case and demonstrates an advertised but under-documented feature.

Top contributors


📝Recent commits

  • 09b652f — Merge pull request #1326 from qubvel-org/dependabot/pip/requirements/huggingface-hub-1.11.0 (adamjstewart)
  • 32b3729 — Merge pull request #1327 from qubvel-org/dependabot/pip/requirements/ruff-0.15.11 (adamjstewart)
  • 9f3aa59 — Build(deps): Bump ruff from 0.15.10 to 0.15.11 in /requirements (dependabot[bot])
  • 0a573da — Build(deps): Bump huggingface-hub from 1.10.2 to 1.11.0 in /requirements (dependabot[bot])
  • f47671d — Merge pull request #1324 from qubvel-org/dependabot/pip/requirements/packaging-26.1 (adamjstewart)
  • f93d20b — Merge pull request #1325 from qubvel-org/dependabot/pip/requirements/huggingface-hub-1.10.2 (adamjstewart)
  • b6fbd90 — Build(deps): Bump huggingface-hub from 1.10.1 to 1.10.2 in /requirements (dependabot[bot])
  • e104797 — Build(deps): Bump packaging from 26.0 to 26.1 in /requirements (dependabot[bot])
  • 813d5fb — Merge pull request #1322 from qubvel-org/dependabot/github_actions/actions/upload-artifact-7.0.1 (adamjstewart)
  • 6e850f6 — Build(deps): Bump actions/upload-artifact from 7.0.0 to 7.0.1 (dependabot[bot])

🔒Security observations

The segmentation_models.pytorch repository shows a reasonable security posture for a machine learning library, but has room for improvement. Primary concerns include lack of visible dependency version pinning, absence of model file integrity verification mechanisms, and no documented security policy. The codebase appears clean from obvious code injection vulnerabilities given its nature as a ML library focused on model architectures rather than web/system services. Recommendation: Implement strict dependency management, add model checksum verification, establish a security policy, and consider regular security audits of external integrations (Hugging Face Hub, timm).

  • Medium · Missing dependency pinning in requirements files — requirements/ directory. The requirements files (requirements/required.txt, requirements/test.txt, requirements/docs.txt) are not provided for review. Without version pinning or lock files, the project may be vulnerable to installing compromised or breaking versions of dependencies. This is a common supply chain risk vector. Fix: Implement strict version pinning for all dependencies using pip-compile or poetry.lock. Use hash verification and consider implementing dependency scanning with tools like Safety or Snyk.
  • Medium · Potential model file integrity risk — scripts/models-conversions/. The repository includes model conversion scripts (scripts/models-conversions/) that download and process pretrained models. Without visible checksum verification or signature validation, there is a risk of man-in-the-middle attacks or compromised model files being loaded. Fix: Implement cryptographic verification of downloaded models using checksums (SHA256) or digital signatures. Document the expected hash values for all pretrained models.
  • Medium · Dependency on external model hubs without verification — segmentation_models_pytorch/base/hub_mixin.py, examples/. The codebase integrates with Hugging Face Hub (evident from file 'examples/save_load_model_and_share_with_hf_hub.ipynb' and 'base/hub_mixin.py'). Without proper validation, users could inadvertently download malicious models from untrusted sources. Fix: Implement model signature verification and trusted repository allowlisting. Add warnings when loading models from non-official sources. Document security best practices for model downloads.
  • Low · No visible input validation in dataset modules — segmentation_models_pytorch/datasets/oxford_pet.py. The datasets module (oxford_pet.py) likely handles file paths and downloads. Without visible input validation, there could be path traversal or arbitrary file read vulnerabilities. Fix: Implement strict input validation for file paths using os.path.abspath() and verification that paths remain within expected directories. Sanitize all user-provided file paths.
  • Low · Missing SECURITY.md and vulnerability disclosure policy — Repository root. No visible security policy or responsible disclosure guidelines. This makes it difficult for security researchers to report vulnerabilities confidentially. Fix: Create a SECURITY.md file with clear vulnerability disclosure guidelines, contact information, and expected response times.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
