RepoPilot

sjmoran/deeplpf-image-enhancement

Code for CVPR 2020 paper "Deep Local Parametric Filters for Image Enhancement"

AVOID (worst-case)

Stale and unlicensed — last commit 2y ago

Use as dependency: AVOID

no license — legally unclear; last commit was 2y ago…

Fork & modify: AVOID

no license — can't legally use code; no tests detected…

Learn from: GO

Documented and popular — useful reference codebase to read through.

Deploy as-is: AVOID

no license — can't legally use code; last commit was 2y ago…

  • 4 active contributors
  • Stale — last commit 2y ago
  • Small team — 4 top contributors
  • Concentrated ownership — top contributor handles 64% of commits
  • No license — legally unclear to depend on
  • No CI workflows detected
  • No test directory detected
  • Scorecard: marked unmaintained (0/10)
  • Scorecard: default branch unprotected (0/10)
What would change the verdict?
  • Use as dependency: AVOID → WAIT if a permissive license (MIT, Apache-2.0, etc.) is published
  • Fork & modify: AVOID → WAIT if a LICENSE file is added
  • Deploy as-is: AVOID → WAIT if a LICENSE file is added

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard

Embed this verdict

[![RepoPilot: AVOID](https://repopilot.app/api/badge/sjmoran/deeplpf-image-enhancement)](https://repopilot.app/r/sjmoran/deeplpf-image-enhancement)

Paste into your README — the badge live-updates from the latest cached analysis.

Onboarding doc

Onboarding: sjmoran/deeplpf-image-enhancement

Generated by RepoPilot · 2026-05-05 · Source

Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/sjmoran/deeplpf-image-enhancement shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

Verdict

AVOID — Stale and unlicensed — last commit 2y ago

  • 4 active contributors
  • ⚠ Stale — last commit 2y ago
  • ⚠ Small team — 4 top contributors
  • ⚠ Concentrated ownership — top contributor handles 64% of commits
  • ⚠ No license — legally unclear to depend on
  • ⚠ No CI workflows detected
  • ⚠ No test directory detected
  • ⚠ Scorecard: marked unmaintained (0/10)
  • ⚠ Scorecard: default branch unprotected (0/10)

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live sjmoran/deeplpf-image-enhancement repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/sjmoran/deeplpf-image-enhancement.

What it runs against: a local clone of sjmoran/deeplpf-image-enhancement — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in sjmoran/deeplpf-image-enhancement | Confirms the artifact applies here, not a fork |
| 2 | Default branch master exists | Catches branch renames |
| 3 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 4 | Last commit ≤ 658 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>sjmoran/deeplpf-image-enhancement</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sjmoran/deeplpf-image-enhancement. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/sjmoran/deeplpf-image-enhancement.git
#   cd deeplpf-image-enhancement
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of sjmoran/deeplpf-image-enhancement and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sjmoran/deeplpf-image-enhancement(\.git)?\b" \
  && ok "origin remote is sjmoran/deeplpf-image-enhancement" \
  || miss "origin remote is not sjmoran/deeplpf-image-enhancement (artifact may be from a fork)"

# 2. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 3. Critical files exist
for f in main.py model.py unet.py data.py util.py; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 658 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~628d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sjmoran/deeplpf-image-enhancement"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

DeepLPF is a CVPR 2020 deep learning model for image enhancement that learns local parametric filters (LPF), predicting spatially-varying filter parameters at the pixel level. It takes under-enhanced RAW or sRGB images and outputs enhanced versions by computing localized filter coefficients, enabling fine-grained adaptive enhancement without global tone curves. Flat structure: main.py and data.py contain the training/inference logic, metric.py handles PSNR/SSIM evaluation, and adobe5k_dpe/ contains the Adobe DPE dataset splits (images_train.txt, images_test.txt, etc.) plus example inputs/outputs for reproducibility. Model weights and architecture definitions are likely in subdirectories not fully enumerated in the file list.

Who it's for

Computer vision researchers and image processing engineers working on automated photo enhancement pipelines, particularly those targeting dataset-driven enhancement alternatives to manual Lightroom-style editing. Used by photographers, post-processing engineers, and ML practitioners interested in learnable image-to-image translation for enhancement tasks.

Maturity & risk

Moderately mature research code from 2020. The repo includes pre-trained models, example test data with ground-truth comparisons (adobe5k_dpe/), and comprehensive example inputs/outputs showing PSNR/SSIM metrics. However, it is explicitly designed for batch size 1 only, and the README tells users to contact the authors about training difficulties, suggesting limited ongoing maintenance and potential API brittleness.

High risk for production use: (1) the batch size=1 limitation is architectural, not a trivial fix; (2) tight torch==1.7.1 and torchvision==0.8.2 version pins from 2020 create compatibility problems with modern CUDA/hardware; (3) a single primary maintainer (sjmoran) with no visible CI/CD pipeline or test suite; (4) no recent commits visible in the provided metadata, suggesting dormant status post-publication.

Active areas of work

No active development visible. The repo appears frozen post-CVPR-2020 publication. Example test results in adobe5k_dpe/deeplpf_example_test_inference/ show model outputs with metrics (e.g., 'PSNR_34.596_SSIM_0.980'), indicating validation against ground truth in adobe5k_dpe/deeplpf_example_test_output/, but no PRs, issues, or commits are referenced in the provided metadata.

Get running

git clone https://github.com/sjmoran/deeplpf-image-enhancement.git
cd deeplpf-image-enhancement
pip install -r requirements.txt  # install numpy==1.22.0 torch==1.7.1 torchvision==0.8.2 etc.
python main.py  # check README for exact args (likely --mode train/test flags)

Daily commands:

# Training (see main.py for flags):
python main.py --mode train --dataset adobe5k_dpe --batch_size 1

# Inference on test set:
python main.py --mode test --dataset adobe5k_dpe

# Single image enhancement:
python main.py --mode inference --input_path adobe5k_dpe/deeplpf_example_test_input/a4576-DSC_0217_input.png

Exact args require inspecting main.py; the README snippet does not detail the CLI.

Map of the codebase

  • main.py — Entry point for training and inference; orchestrates the entire pipeline with data loading, model training, validation, and testing workflows.
  • model.py — Defines the DeepLPF neural network architecture; core model that learns local parametric filters for image enhancement.
  • unet.py — U-Net backbone encoder-decoder architecture; fundamental component used by the model to generate filter parameters.
  • data.py — Dataset loader and preprocessing pipeline; handles Adobe5K and other dataset formats with image pair loading and augmentation.
  • util.py — Utility functions for tensor/image conversions, file I/O, and model checkpointing; heavily used across training and inference.
  • metric.py — Image quality metrics (PSNR, SSIM) calculation; used for validation and test set evaluation to measure enhancement quality.
  • requirements.txt — Python dependencies pinned to specific versions; critical for environment reproducibility (PyTorch 1.7.1, numpy 1.22.0, etc.).

Components & responsibilities

  • model.py (DeepLPF class) (PyTorch nn.Module, U-Net backbone) — Core learning model; takes RGB image, predicts local parametric filter parameters (learnable kernels) at each pixel, applies filters element-wise
    • Failure mode: Numerical instability

How to make changes

Add a new dataset and train DeepLPF

  1. Create a new dataset directory (e.g., my_dataset/) with images_train.txt, images_valid.txt, images_test.txt listing input/output image pairs (adobe5k_dpe/images_train.txt)
  2. Add a new dataset class in data.py following the Adobe5K pattern, implementing __getitem__ to load paired images (data.py)
  3. Modify main.py to instantiate your dataset class and set --dataset_dir to your directory path (main.py)
  4. Run training: python main.py --dataset_dir my_dataset --dataset_name my_dataset --num_epochs 500 (main.py)
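The split-file convention from step 1 can be sketched as a small parser. Note this is an assumption-heavy sketch: the `_input`/`_gt` suffix convention is inferred from the example filenames elsewhere in this artifact, not confirmed against data.py.

```python
from pathlib import Path


def read_split(split_file):
    """Parse a split file (one image ID per line) into (input, ground-truth)
    filename pairs. The _input/_gt suffix convention is a guess based on the
    adobe5k_dpe/ example filenames -- verify against data.py before relying
    on it."""
    pairs = []
    for line in Path(split_file).read_text().splitlines():
        name = line.strip()
        if not name:
            continue  # skip blank lines in the split file
        pairs.append((f"{name}_input.png", f"{name}_gt.png"))
    return pairs
```

A dataset class for step 2 would wrap this: `__getitem__` loads the pair at index `i` and returns both images as tensors.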

Perform inference on custom images

  1. Load a pre-trained model checkpoint from pretrained_models/adobe_dpe/ in your Python script (util.py)
  2. Use model.py to instantiate DeepLPF and load checkpoint weights via util.load_checkpoint() (model.py)
  3. Convert input image to tensor using util.im2tensor(), run model.forward(), and convert output back with util.tensor2im() (util.py)
  4. Optionally compute quality metrics (PSNR/SSIM) using metric.py if ground-truth is available (metric.py)
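The conversion helpers in step 3 can be illustrated without torch. These numpy stand-ins mirror the `im2tensor`/`tensor2im` names mentioned above, but the actual signatures in util.py are unverified; they show the HWC-uint8 ↔ CHW-float convention most PyTorch pipelines use.

```python
import numpy as np


def im2tensor(img):
    """HWC uint8 [0,255] -> CHW float32 [0,1], the layout PyTorch models
    expect. Stand-in for util.im2tensor; the real helper may differ."""
    return img.astype(np.float32).transpose(2, 0, 1) / 255.0


def tensor2im(t):
    """CHW float32 [0,1] -> HWC uint8 [0,255], clipping out-of-range
    values the model may produce."""
    arr = np.clip(t, 0.0, 1.0) * 255.0
    return arr.round().astype(np.uint8).transpose(1, 2, 0)
```

The round trip `tensor2im(im2tensor(img))` is lossless for uint8 inputs, which makes it easy to sanity-check against the repo's real helpers.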

Modify loss function or training procedure

  1. In main.py, locate the loss computation section (typically L1 or MSE loss) in the training loop (main.py)
  2. Replace the loss function (e.g., from nn.L1Loss() to perceptual loss) while keeping the backward/optimizer step (main.py)
  3. Adjust learning rate schedule or optimizer (default Adam) in the trainer initialization section of main.py (main.py)
  4. Re-run training with python main.py --dataset_dir ... --num_epochs ... and monitor loss curves (main.py)
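The swap in step 2 amounts to replacing one callable in the training loop. A minimal numpy illustration of the two common pixel losses (the actual loss code in main.py is unverified; in PyTorch these would be `nn.L1Loss()` and `nn.MSELoss()`):

```python
import numpy as np


def l1_loss(pred, target):
    """Mean absolute error -- the pixel loss assumed to be the default."""
    return np.abs(pred - target).mean()


def mse_loss(pred, target):
    """Mean squared error -- penalises large residuals more heavily than
    L1, which tends to produce smoother (sometimes blurrier) outputs."""
    return ((pred - target) ** 2).mean()
```

Whichever callable the loop uses, the backward/optimizer step stays unchanged; only the scalar it differentiates through differs.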

Evaluate model on test set and generate metrics

  1. In main.py, call the test/validation function after training completes (or load a checkpoint) (main.py)
  2. The test loop uses metric.py to compute PSNR and SSIM on predicted vs. ground-truth images (metric.py)
  3. Results are logged to console and saved in checkpoint filename (e.g., testpsnr_23.904_testloss_0.031) (main.py)
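Since metrics are embedded in checkpoint filenames, a small parser lets you tabulate runs without re-running evaluation. The filename pattern below is inferred from the example in step 3 and may need adjusting to match main.py:

```python
import re

# Pattern inferred from names like 'testpsnr_23.904_testloss_0.031';
# verify against the checkpoint-saving code in main.py.
CKPT_PATTERN = re.compile(r"testpsnr_(\d+\.\d+)_testloss_(\d+\.\d+)")


def parse_checkpoint_metrics(filename):
    """Extract (psnr, loss) from a checkpoint filename, or None if the
    name does not follow the assumed convention."""
    m = CKPT_PATTERN.search(filename)
    if m is None:
        return None
    return float(m.group(1)), float(m.group(2))
```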

Why these technologies

  • PyTorch 1.7.1 — Provides dynamic computational graphs, automatic differentiation, and GPU support for efficient neural network training on image enhancement
  • U-Net architecture (unet.py) — Encoder-decoder structure with skip connections captures multi-scale image features needed to predict spatially-varying filter parameters
  • NumPy + SciPy — Numerical computing foundation for tensor operations, image processing, and metric calculations (PSNR/SSIM)
  • scikit-image + Pillow + imageio — Image I/O, preprocessing, and metric computation (SSIM); supports PNG, TIFF, JPG formats from Adobe5K dataset

Trade-offs already made

  • Batch size fixed to 1 only

    • Why: Simplifies implementation; allows per-pixel parametric filters to be spatially-varying without complex batching logic
    • Consequence: Slower training and inference on multi-GPU systems; cannot leverage data parallelism; longer wall-clock time to convergence
  • U-Net backbone for parameter generation

    • Why: Proven architecture for dense prediction tasks; efficient encoder-decoder design with skip connections
    • Consequence: Model size grows with desired filter parameter count; memory footprint may limit high-resolution images on older GPUs
  • Adobe5K dataset as primary benchmark

    • Why: Large curated dataset with professional retouching pairs; standard in image enhancement research
    • Consequence: Model overfits to Adobe's retouching style; may not generalize well to other enhancement tasks (e.g., denoising, deblurring)

Non-goals (don't propose these)

  • Does not support batch sizes > 1 (explicitly stated in README)
  • Does not include real-time mobile/edge deployment optimization (model weights are full-precision PyTorch tensors)
  • Does not implement user-interactive enhancement (e.g., slider-based parameter adjustment)
  • Does not handle video frame enhancement (image-only, single-frame processing)
  • Does not provide automatic hyperparameter tuning or architecture search

Traps & gotchas

  • Batch size=1 only: main.py and data.py likely have hardcoded batch=1 assumptions in tensor shapes; setting batch_size > 1 will crash or silently produce wrong results.
  • Old PyTorch version: torch==1.7.1 (Nov 2020) requires CUDA 10.1 or 11.0; modern NVIDIA drivers may not support it without custom builds.
  • Adobe DPE dataset split files: images_train.txt and images_test.txt use relative paths; running from the wrong directory breaks data loading.
  • Pre-trained model path: the code likely expects a models/ or checkpoints/ directory with a specific naming convention not visible in the file list; check main.py for hardcoded paths.
  • No input validation: the code assumes PNG/TIF image formats in adobe5k_dpe/ and will fail on unsupported formats without clear error messages.

Architecture

Concepts to learn

  • Local Parametric Filters (LPF) — DeepLPF's core innovation: instead of learning global tone curves, it predicts spatially-varying filter coefficients per pixel/region, enabling adaptive enhancement that respects local image structure
  • Adobe DPE Dataset (Deep Photo Enhancer) — The standard benchmark for image enhancement research; contains 5,500 input/output pairs from Adobe Lightroom expertly edited; used throughout this codebase (adobe5k_dpe/) for training and validation
  • PSNR and SSIM Metrics — metric.py computes these for quantitative evaluation; PSNR measures pixel-level accuracy while SSIM captures perceptual similarity; critical for reproducibility and benchmarking against baselines
  • Image-to-Image Translation (Conditional GAN paradigm) — DeepLPF frames enhancement as a supervised pixel-level prediction task (input image → enhancement parameters); related to pix2pix and CycleGAN architectures for paired image translation
  • Batch Normalization and Training Instability at Batch Size 1 — The hardcoded batch_size=1 constraint stems from batch norm's statistical instability with single samples; understanding why this breaks larger batches is essential for refactoring the model
  • Residual Learning — Enhancement tasks benefit from skip connections to preserve details; DeepLPF likely uses residual blocks to learn enhancement 'residuals' rather than absolute outputs (common in enhancement literature)
  • Perceptual Loss vs Pixel Loss — Enhancement models often trade L2 (pixel) loss for perceptual losses (VGG features, adversarial) to better match human preferences; metric.py likely uses L2 but the model code may employ perceptual objectives

Related repos

  • chaofengc/Face-Restoration-Benchmark — Similar CNN-based image enhancement and restoration pipeline; shares metric.py patterns (PSNR/SSIM) for quality evaluation
  • cszn/BSRGAN — Related blind super-resolution and enhancement work using PyTorch; demonstrates batch-size-agnostic architecture that DeepLPF could adopt
  • timothybrooks/unprocessing — Inverse task (converting processed RGB back to RAW-like); complements DeepLPF's enhancement pipeline and shares Adobe DPE dataset interest
  • sjmoran/dataquality — Same author; likely related work on image quality assessment and dataset curation for enhancement tasks
  • aasharma/Deep-Photo-Enhancer — Earlier deep learning photo enhancement baseline (2017); DeepLPF builds upon and extends this approach with local parametric filters

PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add batch size validation and multi-batch support refactoring

The README explicitly states 'this code is designed for a batch size of 1' and 'needs re-engineered to support higher batch sizes.' This is a critical limitation. A PR should add validation in main.py and data.py to prevent batch_size > 1, then refactor the model.py and unet.py architecture to support dynamic batch sizes by removing hardcoded batch dimension assumptions. This would unlock significant usability improvements.

  • [ ] Add batch_size validation in main.py that raises clear error if batch_size > 1 with helpful message
  • [ ] Identify and document all batch-dimension-dependent operations in model.py and unet.py (e.g., reshape operations, hardcoded indexing)
  • [ ] Refactor model.py forward pass to handle variable batch dimensions
  • [ ] Refactor unet.py to remove batch-size-specific assumptions in skip connections and layer outputs
  • [ ] Test with batch_size values of 1, 2, 4, 8 on sample data from adobe5k_dpe/deeplpf_example_test_input
  • [ ] Add comment blocks documenting which layers now support multi-batch processing
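The first checklist item is small enough to sketch. The function name and error message below are placeholders, since main.py's actual argument parsing is unverified:

```python
def validate_batch_size(batch_size: int) -> int:
    """Fail fast on unsupported batch sizes. The README states the code is
    designed for batch_size=1; until the model is refactored, reject
    anything else with an actionable message."""
    if batch_size != 1:
        raise ValueError(
            f"batch_size={batch_size} is unsupported: DeepLPF currently "
            "assumes a batch dimension of 1 (see README). Use batch_size=1."
        )
    return batch_size
```

Called right after argument parsing, this turns a silent wrong-result failure into an immediate, explained error.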

Add comprehensive unit tests for metric.py evaluation functions

The metric.py file computes PSNR and SSIM scores (visible in example output filenames like 'PSNR_27.426_SSIM_0.879'), but there are no tests validating these computations. A new test file test_metrics.py should verify correctness against known values to prevent silent metric computation bugs.

  • [ ] Create test_metrics.py with test cases for PSNR computation against reference implementations
  • [ ] Create test cases for SSIM computation using skimage.metrics.structural_similarity as ground truth (already in requirements.txt)
  • [ ] Add edge case tests: identical images (PSNR=inf), completely different images, single-channel vs multi-channel inputs
  • [ ] Add batch verification tests to ensure metric.py can handle outputs from different model variants (adobe_dpe, adobe_upe, adobe_distort_and_recover)
  • [ ] Use sample images from adobe5k_dpe/deeplpf_example_test_input and adobe5k_dpe/deeplpf_example_test_output for integration tests
  • [ ] Document expected metric ranges for each dataset variant in test docstrings
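A first cut of test_metrics.py might look like the sketch below. Importing from metric.py is the eventual goal, but since its API is unverified, the sketch uses an inline reference PSNR as its oracle to stay self-contained:

```python
import numpy as np


def psnr_ref(pred, target, max_val=255.0):
    """Inline reference PSNR (dB) used as the test oracle; swap in
    metric.py's implementation once its signature is confirmed."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)


def test_psnr_identical_images_is_infinite():
    # Edge case from the checklist: zero MSE must yield infinite PSNR.
    img = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
    assert psnr_ref(img, img) == float("inf")


def test_psnr_known_value():
    # All-zero vs all-255 images: MSE = 255^2, so PSNR = 0 dB exactly.
    a = np.zeros((8, 8), dtype=np.uint8)
    b = np.full((8, 8), 255, dtype=np.uint8)
    assert abs(psnr_ref(a, b)) < 1e-9
```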

Add data loading integration tests and validation schema

The data.py file loads from multiple dataset splits (images_train.txt, images_valid.txt, images_test.txt) with inconsistent filename suffixes (some .tif, some .png, some with _input/_gt/_output suffixes). There's no validation that data loader handles all variants correctly. Add tests to prevent dataset loading regressions.

  • [ ] Create test_data_loading.py that validates data.py can load all examples from adobe5k_dpe/ subdirectories
  • [ ] Add tests verifying that matching input/output pairs are correctly paired (e.g., a4514-kme_0258.tif and a4514-kme_0258_gt.png)
  • [ ] Add tests for edge cases: missing ground truth files, mismatched image formats, corrupted .tif/.png files
  • [ ] Document expected directory structure and naming conventions in data.py docstrings with examples from adobe5k_dpe/
  • [ ] Add validation in data.py to raise clear errors when image_input and image_target shape mismatches occur
  • [ ] Test compatibility with all three pretrained model variants which may expect different input formats
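The pairing check in the second item can be expressed as a pure function over filenames, which makes it trivially testable. The `_gt` suffix and mixed extensions below are taken from the example names quoted above, not confirmed against data.py:

```python
from pathlib import Path


def expected_gt_name(input_name, gt_ext=".png"):
    """Map an input filename to its expected ground-truth counterpart,
    e.g. 'a4514-kme_0258.tif' -> 'a4514-kme_0258_gt.png'. Convention
    inferred from the adobe5k_dpe/ examples; verify against data.py."""
    stem = Path(input_name).stem
    return f"{stem}_gt{gt_ext}"


def find_unpaired(inputs, gts):
    """Return input filenames whose expected ground-truth file is absent."""
    gt_set = set(gts)
    return [i for i in inputs if expected_gt_name(i) not in gt_set]
```

A test then lists real files under adobe5k_dpe/ and asserts `find_unpaired(...)` is empty.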

Good first issues

  • Add unit tests for metric.py (test PSNR/SSIM computation against known-good OpenCV/scikit-image outputs); currently no test/ directory visible, making it hard to verify metric correctness
  • Create a minimal Dockerfile or requirements-dev.txt with pinned numpy==1.22.0, torch==1.7.1 to eliminate 'it works on my machine' environment issues; test on Python 3.8 and 3.9 explicitly
  • Document the exact CLI arguments for main.py with examples (e.g., python main.py --mode train --dataset adobe5k_dpe --input_dir /path/to/inputs --epochs 100); the README does not show command signatures

Recent commits

  • e610b2b — Update README.md (PierreMarza)
  • 22dbdf8 — Merge pull request #22 from sjmoran/dependabot/pip/scipy-1.10.0 (sjmoran)
  • bb6fd7b — Bump scipy from 1.5.2 to 1.10.0 (dependabot[bot])
  • 7fd269e — Merge pull request #18 from sjmoran/dependabot/pip/numpy-1.22.0 (sjmoran)
  • af3052f — Bump numpy from 1.19.2 to 1.22.0 (dependabot[bot])
  • eaea415 — Update README.md (sjmoran)
  • 9551718 — Added asserts for batch size (Sean Moran)
  • cca80a6 — Merge branch 'master' of https://github.com/sjmoran/DeepLPF (Sean Moran)
  • 94292a3 — Added assertion statements for batch size (Sean Moran)
  • 1739458 — Update requirements.txt (sjmoran)

Security observations

  • High · Outdated and Vulnerable Dependency: numpy==1.22.0 — requirements.txt. numpy 1.22.0 is outdated and contains known security vulnerabilities. Current stable versions are significantly newer (1.24+). This version may be susceptible to denial of service or other attacks through maliciously crafted inputs. Fix: Update numpy to the latest stable version (>=1.24.0). Run 'pip install --upgrade numpy' and test compatibility with the codebase.
  • High · Outdated and Vulnerable Dependency: torch==1.7.1 — requirements.txt. torch 1.7.1 is significantly outdated (released in 2020). Current versions are 2.0+. This version may contain multiple security vulnerabilities and bugs. PyTorch older versions have had known CVEs. Fix: Update torch to a recent stable version (>=2.0.0). Verify model compatibility with newer versions and retrain/validate if necessary.
  • High · Outdated and Vulnerable Dependency: torchvision==0.8.2 — requirements.txt. torchvision 0.8.2 is outdated and should be paired with torch 1.7.1. Current versions are 0.15+. This version may contain security vulnerabilities and compatibility issues. Fix: Update torchvision to match the torch version (>=0.15.0 for torch 2.0+). Ensure compatibility with updated torch.
  • High · Insecure Pillow Dependency Specification — requirements.txt. The requirement 'pillow>=8.3.2' is overly permissive. While it specifies a minimum version, pillow 8.3.2 itself is outdated. No upper bound is specified, which could lead to unexpected breaking changes or security issues with future versions. Fix: Update to 'pillow>=10.0.0' and consider pinning to a specific major version like 'pillow>=10.0.0,<11.0.0' for stability.
  • Medium · Outdated Dependency: scipy==1.10.0 — requirements.txt. scipy 1.10.0 is from early 2023 and is outdated. More recent versions may include security fixes and performance improvements. Fix: Update scipy to the latest stable version (>=1.11.0). Test compatibility with the image processing pipeline.
  • Medium · Outdated Dependency: matplotlib==3.3.2 — requirements.txt. matplotlib 3.3.2 is significantly outdated (released in 2020). Current versions are 3.7+. This may contain security vulnerabilities and compatibility issues. Fix: Update matplotlib to version 3.7+ or latest stable. Verify that visualization outputs remain correct.
  • Medium · Outdated Dependency: imageio==2.9.0 — requirements.txt. imageio 2.9.0 is outdated. File I/O libraries are security-sensitive as they process external input. Current versions are 2.31+. Fix: Update imageio to the latest stable version (>=2.31.0). This is critical since the codebase heavily processes image files.
  • Medium · Duplicate/Conflicting Dependency: skimage==0.0 — requirements.txt. The package 'skimage==0.0' is a dummy package and should not be present. The actual package is 'scikit-image==0.18.1' which is already specified. This indicates confused or incorrect dependency management. Fix: Remove 'skimage==0.0' from requirements.txt. The scikit-image package is already properly specified.
  • Medium · No Input Validation Visible for Image Loading — data.py, main.py (inferred). The codebase loads and processes image files from external sources (adobe5k_dpe directory and user inputs). Without visible input validation in the code structure, there could be risks from maliciously crafted image files, especially with outdated imageio and pillow versions. Fix: Implement strict file type validation, file size…

LLM-derived; treat as a starting point, not a security audit.

Where to read next


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.