
sjmoran/curl-image-enhancement

Code for the ICPR 2020 paper: "CURL: Neural Curve Layers for Image Enhancement"

AVOID

Stale and unlicensed — last commit 3y ago (worst-case across usage scenarios)

  • Use as dependency — AVOID: no license (legally unclear); last commit was 3y ago
  • Fork & modify — AVOID: no license (can't legally use code); no tests detected
  • Learn from — GO: documented and popular — useful reference codebase to read through
  • Deploy as-is — AVOID: no license (can't legally use code); last commit was 3y ago

  • 5 active contributors
  • Stale — last commit 3y ago
  • Small team — 5 top contributors
  • Single-maintainer risk — top contributor 94% of commits
  • No license — legally unclear to depend on
  • No CI workflows detected
  • No test directory detected
  • Scorecard: marked unmaintained (0/10)
  • Scorecard: default branch unprotected (0/10)
What would change the verdict?
  • Use as dependency — AVOID → WAIT if: a permissive license (MIT, Apache-2.0, etc.) is published and there is at least 1 commit in the last 365 days
  • Fork & modify — AVOID → WAIT if: a LICENSE file is added
  • Deploy as-is — AVOID → WAIT if: a LICENSE file is added

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard

Embed this verdict

[![RepoPilot: AVOID](https://repopilot.app/api/badge/sjmoran/curl-image-enhancement)](https://repopilot.app/r/sjmoran/curl-image-enhancement)

Paste into your README — the badge live-updates from the latest cached analysis.

Onboarding doc

Onboarding: sjmoran/curl-image-enhancement

Generated by RepoPilot · 2026-05-05 · Source

Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in the "Verify before trusting" section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Read in the suggested order before editing unfamiliar code. The reading-order list is computed from the actual import graph, not LLM guesses; reading bottom-up materially reduces wrong-edit risk.
  3. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  4. Cite source on changes. When proposing an edit, cite the specific path/to/file.ext:Lstart-Lend you're reasoning about, the same way RepoPilot's own RAG cites code in https://repopilot.app/r/sjmoran/curl-image-enhancement.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

Verdict

AVOID — Stale and unlicensed — last commit 3y ago

  • 5 active contributors
  • ⚠ Stale — last commit 3y ago
  • ⚠ Small team — 5 top contributors
  • ⚠ Single-maintainer risk — top contributor 94% of commits
  • ⚠ No license — legally unclear to depend on
  • ⚠ No CI workflows detected
  • ⚠ No test directory detected
  • ⚠ Scorecard: marked unmaintained (0/10)
  • ⚠ Scorecard: default branch unprotected (0/10)

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live sjmoran/curl-image-enhancement repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/sjmoran/curl-image-enhancement.

What it runs against: a local clone of sjmoran/curl-image-enhancement — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in sjmoran/curl-image-enhancement | Confirms the artifact applies here, not a fork |
| 2 | Default branch master exists | Catches branch renames |
| 3 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 4 | Last commit ≤ 1064 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>sjmoran/curl-image-enhancement</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sjmoran/curl-image-enhancement. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/sjmoran/curl-image-enhancement.git
#   cd curl-image-enhancement
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of sjmoran/curl-image-enhancement and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sjmoran/curl-image-enhancement(\.git)?\b" \
  && ok "origin remote is sjmoran/curl-image-enhancement" \
  || miss "origin remote is not sjmoran/curl-image-enhancement (artifact may be from a fork)"

# 2. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 3. Critical files exist
for f in main.py model.py data.py metric.py util.py; do
  test -f "$f" \
    && ok "$f" \
    || miss "missing critical file: $f"
done

# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 1064 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1034d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sjmoran/curl-image-enhancement"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

Suggested reading order

Computed from the actual import graph (no LLM). Read in this order to learn the codebase from the foundation up — each step builds on the previous ones.

  1. util.py — Foundation: doesn't import anything internally and is imported by 3 other files. Read first to learn the vocabulary.
  2. rgb_ted.py — Foundation: imported by 1, no internal dependencies of its own.
  3. model.py — Built on the foundation; imported by 1 downstream file.
  4. data.py — Built on the foundation; imported by 1 downstream file.
  5. main.py — Layer 2 — application-level code that wires the lower layers together.

TL;DR

CURL (Neural Curve Layers for Global Image Enhancement) is a PyTorch-based deep learning model that performs end-to-end image enhancement by learning pixel-wise curve transformations. It uses neural curve layers to predict per-pixel enhancement curves across RGB channels, enabling global adjustments like exposure, contrast, and color correction without hand-crafted filters. The model was published at ICPR 2020 and achieves state-of-the-art results on the Adobe5k DPE dataset. Flat monorepo: core model logic in model.py, training loop in main.py, data pipeline in data.py (references Adobe5k dataset structure), metrics in metric.py. Example inputs/outputs in adobe5k_dpe/ directory with test images. Pretrained checkpoint in pretrained_models/adobe_dpe/. Three Jupyter notebooks (Colab Demo.ipynb primary) serve as entry points. Utility functions in util.py and rgb_ted.py.
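The core idea — applying a learned, global tone curve per channel — can be illustrated with a tiny standalone sketch. This is not the repo's code: the curve in CURL is predicted by the network as knot points, whereas the knot values below are hand-picked for illustration.

```python
def apply_tone_curve(value, knots):
    """Apply a piecewise-linear tone curve to one pixel value in [0, 1].

    `knots` are the curve's y-values at evenly spaced x positions in [0, 1].
    In CURL these would be network outputs; here they are hypothetical.
    """
    n = len(knots) - 1
    v = min(max(value, 0.0), 1.0)      # clamp into [0, 1]
    pos = v * n                         # position along the knot grid
    i = min(int(pos), n - 1)            # left knot index
    t = pos - i                         # fractional distance to the right knot
    return knots[i] * (1 - t) + knots[i + 1] * t

# Toy 2x2 red channel, values in [0, 1]
red = [[0.1, 0.4], [0.6, 0.9]]

# A brightening curve (hypothetical knot values)
knots = [0.0, 0.35, 0.65, 0.85, 1.0]
enhanced = [[apply_tone_curve(p, knots) for p in row] for row in red]

print([[round(p, 3) for p in row] for row in enhanced])  # [[0.14, 0.53], [0.73, 0.94]]
```

Every pixel passes through the same curve, which is why CURL's adjustments are global (exposure, contrast, color balance) rather than spatially local.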

Who it's for

Computer vision researchers and practitioners working on image enhancement, photo editing automation, and low-level vision tasks who need a trainable alternative to traditional curve-based photo editors. Specifically useful for photographers and ML engineers building auto-enhancement pipelines that must generalize across diverse lighting and color conditions.

Maturity & risk

Experimental but established (ICPR 2020 publication, pretrained weights included). The main branch supports batch size 1 only—a significant limitation. Community members have provided refactored versions supporting larger batches (CURL_large_batch.tar.gz, CURL_refactored.tar.gz, added May 2022), but these remain untested by original authors. No CI/testing infrastructure is visible; project appears minimally maintained post-publication.

CRITICAL: Batch size 1 constraint makes the main codebase inefficient for production. Dependencies are pinned to 2020 era (torch==1.7.1, numpy==1.22.0, scipy==1.5.2), risking compatibility issues with modern libraries. Single maintainer (sjmoran) with no visible recent commits or issue triage. Community patches exist but lack official integration, creating maintenance fragmentation.

Active areas of work

No active development visible. README documents two community contributions (April 2022: batch>1 version; May 2022: refactored version) but original authors have not tested or merged them. The presence of CURL_large_batch.tar.gz and CURL_refactored.tar.gz suggests acknowledgment of limitations without official maintenance.

Get running

git clone https://github.com/sjmoran/curl-image-enhancement.git
cd curl-image-enhancement
pip install -r requirements.txt
jupyter notebook "Colab Demo.ipynb"

Alternatively, examine adobe5k_dpe/images_inference.txt for inference examples and load pretrained_models/adobe_dpe/curl_validpsnr_23.073045286204017_...epoch_510_model.pt.

Daily commands: For inference (the primary use case): load the pretrained checkpoint (.pt file), pass an image through the forward pass in model.py, and write out the enhanced image. For training: python main.py with the Adobe5k dataset directory configured via data.py. See Colab Demo.ipynb for an end-to-end example with sample images in adobe5k_dpe/curl_example_test_input/.

Map of the codebase

  • main.py — Entry point for training and evaluation; orchestrates the entire CURL pipeline including data loading, model initialization, and checkpoint management.
  • model.py — Defines the CURL neural architecture with curve layers; core implementation of the novel neural curve layer abstraction that is the paper's primary contribution.
  • data.py — Handles dataset loading and preprocessing for Adobe DPE; critical for understanding how training data flows through the system.
  • metric.py — Implements PSNR and SSIM evaluation metrics used throughout training and validation; required for assessing model performance.
  • util.py — Utility functions for image I/O, tensor manipulation, and logging; foundational utilities used across all modules.
  • requirements.txt — Specifies all external dependencies including PyTorch 1.7.1 and image processing libraries; essential for environment setup.
  • pretrained_models/adobe_dpe/curl_validpsnr_23.073045286204017_validloss_0.0701291635632515_testpsnr_23.584083321292365_testloss_0.061363041400909424_epoch_510_model.pt — Pre-trained CURL model checkpoint for Adobe DPE dataset; enables inference without training.

Components & responsibilities

  • data.py (Adobe5kDPE) (PIL, numpy, torch.utils.data.Dataset) — Loads input and reference images, normalizes to [0,1], splits into train/valid/test, returns batches of (input, reference) pairs
    • Failure mode: Missing image files or corrupt image data causes dataset loader to fail; incorrect normalization range breaks model training
  • model.py (CURL network) (torch.nn.Module, PyTorch ops) — Defines neural architecture with curve layers that learn tone curves; outputs enhanced image by applying learned curves per channel
    • Failure mode: Poorly initialized curve parameters or diverging loss causes training instability; batch size > 1 breaks curve computation
  • main.py (training orchestration) (argparse, torch.optim.Adam, checkpoint I/O) — Manages training loop, validates on validation set, saves checkpoints, logs metrics, and runs inference
    • Failure mode: OOM errors on GPU if preprocessing or model forward pass allocates large tensors; checkpoint corruption loses training progress
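The data.py failure mode above — an incorrect normalization range silently breaking training — is cheap to guard against. A minimal sketch of the kind of check a loader can apply (illustrative only; the repo's actual normalization code was not verified):

```python
def normalize_to_unit(pixels):
    """Convert 8-bit pixel values (0-255) to floats in [0, 1], failing fast
    if anything lands outside the expected range."""
    out = [p / 255.0 for p in pixels]
    if not all(0.0 <= v <= 1.0 for v in out):
        raise ValueError("normalization produced values outside [0, 1]")
    return out

print(normalize_to_unit([0, 64, 128, 255]))
```

An explicit range check like this turns a subtle training-quality bug into an immediate, debuggable error.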

How to make changes

Train CURL on a new image enhancement dataset

  1. Create a new dataset class in data.py following the Adobe5kDPE pattern with train/valid/test splits (data.py)
  2. Add command-line arguments in main.py to select the new dataset and configure batch size (currently limited to 1) (main.py)
  3. Run training via main.py with appropriate hyperparameters; checkpoints are saved with validation PSNR/SSIM in filename (main.py)

Evaluate CURL on test images for inference

  1. Load pre-trained model weights from pretrained_models/adobe_dpe/ in main.py using the --checkpoint flag (pretrained_models/adobe_dpe/curl_validpsnr_23.073045286204017_validloss_0.0701291635632515_testpsnr_23.584083321292365_testloss_0.061363041400909424_epoch_510_model_weights.pt)
  2. Call model.py forward() with test images and use metric.py to compute PSNR/SSIM if reference images available (model.py)
  3. Use util.py save_image() to write enhanced output images to disk (util.py)
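Step 2's PSNR comparison uses the standard formula 10·log10(peak²/MSE), which is what metric.py implements. A standalone sketch (not the repo's exact code) for sanity-checking your own evaluation numbers:

```python
import math

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio between two equally sized images in [0, peak]."""
    flat_ref = [p for row in ref for p in row]
    flat_test = [p for row in test for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_ref, flat_test)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [[0.5, 0.5], [0.5, 0.5]]
out = [[0.5, 0.6], [0.4, 0.5]]
print(round(psnr(ref, out), 2))  # 23.01
```

Higher is better; the pretrained checkpoint's filename records a validation PSNR of ~23.07 dB on Adobe DPE, so numbers in that range are the expected ballpark.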

Adapt CURL to raw sensor data instead of sRGB

  1. Modify data.py to load raw Bayer CFA data and normalize to [0,1] range with appropriate demosaicing (data.py)
  2. Use raw_ted.py instead of rgb_ted.py for curve generation with raw sensor responses (raw_ted.py)
  3. Retrain model.py on raw data or load pre-trained weights and fine-tune with --lr and --epochs flags in main.py (main.py)

Why these technologies

  • PyTorch 1.7.1 — Enables dynamic computation graphs for neural curve layer implementation and efficient GPU training of the enhancement network
  • NumPy 1.22.0 & SciPy 1.5.2 — Provides numerical operations for metric computation (PSNR/SSIM) and image tensor manipulation
  • scikit-image 0.18.1 — Offers structural similarity (SSIM) computation and image quality metrics required for training evaluation
  • Pillow & imageio — Handles diverse image format loading (PNG, JPG) from Adobe DPE dataset without dependency on external tools

Trade-offs already made

  • Batch size fixed to 1 during training

    • Why: Simplifies curve layer implementation and training stability; curve layers compute statistics per batch
    • Consequence: Slower training convergence, reduced GPU memory efficiency, incompatible with standard data parallelism; refactored version in CURL_refactored.tar.gz addresses this
  • Global enhancement only (no spatial attention)

    • Why: CURL applies learned tone curves globally across the entire image for simplicity and generalization
    • Consequence: Cannot adapt enhancement to local image regions; trades spatial adaptability for model simplicity
  • Pre-trained checkpoint includes optimizer state in full model

    • Why: Enables resuming training from checkpoints without re-initializing optimizer momentum
    • Consequence: Larger checkpoint file; separate weights-only checkpoint provided for inference-only use
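The full-vs-weights-only checkpoint split above follows a common PyTorch convention. A sketch of the pattern using plain dicts — the key names ('model_state_dict', 'optimizer_state_dict') are an assumption, not verified against main.py:

```python
def strip_optimizer_state(checkpoint):
    """Return an inference-only checkpoint containing just the model weights.

    The real CURL checkpoints are torch .pt files; plain dicts stand in here
    to show the general pattern.
    """
    return {"model_state_dict": checkpoint["model_state_dict"]}

full = {
    "model_state_dict": {"curve_layer.weight": [0.1, 0.2]},
    "optimizer_state_dict": {"momentum_buffers": [0.01, 0.02]},
    "epoch": 510,
}
slim = strip_optimizer_state(full)
print(sorted(slim))  # ['model_state_dict']
```

Dropping the optimizer state roughly halves the file for Adam-style optimizers, at the cost of losing the ability to resume training from that checkpoint.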

Non-goals (don't propose these)

  • Does not support batch sizes > 1 in the original codebase (architectural limitation of curve layers)
  • Does not perform real-time inference (designed for offline batch processing)
  • Does not include data augmentation beyond standard normalization
  • Does not support distributed training across multiple GPUs/nodes
  • Does not handle video enhancement (image-only enhancement)

Traps & gotchas

  • Batch size 1 requirement: model.forward() assumes the batch dimension is 1; passing B>1 will crash.
  • Adobe5k dataset: data.py expects images in a specific directory structure not provided in the repo; you must download Adobe5k DPE separately.
  • Pretrained weights filename: the checkpoint name encodes metadata (PSNR, SSIM, epoch); the script expects the exact format in pretrained_models/adobe_dpe/.
  • Python version: the pinned numpy/torch versions are old and may conflict with modern CUDA/conda environments (no environment.yml provided).
  • No validation loop visible in main.py: training may overfit without monitoring code.
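The batch-size trap is the most dangerous because the original code fails deep inside the curve computation. A defensive guard you could add at the top of a wrapper around the forward pass (a sketch; the shape convention (B, C, H, W) is assumed, not verified against model.py):

```python
def check_batch_dim(shape):
    """Fail fast when a tensor with batch size > 1 would reach CURL's forward pass.

    `shape` is assumed to be (B, C, H, W). An explicit check here gives a far
    clearer error than the crash inside the curve layers.
    """
    if shape[0] != 1:
        raise ValueError(
            f"CURL's original forward pass supports batch size 1 only, got {shape[0]}"
        )

check_batch_dim((1, 3, 256, 256))  # fine, returns None
try:
    check_batch_dim((4, 3, 256, 256))
except ValueError as e:
    print(e)
```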

Architecture

Concepts to learn

Related repos

  • sjmoran/DeepLPF — Same author's earlier work on learned parametric filters for image enhancement; predecessor architecture to CURL
  • yuanming-hu/exposure — Another neural image enhancement approach using learned transformations; comparable problem domain and competitive baseline
  • chaofengc/Face-Restoration-Benchmark — Comprehensive benchmark framework for image quality metrics (PSNR, SSIM, LPIPS); metric.py uses similar evaluation patterns
  • adobe-research/adobe-dpe — Original Adobe5k DPE dataset repository; CURL trained exclusively on this dataset; essential for reproduction
  • richzhang/PerceptualSimilarity — LPIPS perceptual loss library; complement to MSE-based losses used in metric.py for future enhancement training

PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add batch size abstraction and support higher batch sizes (>=2)

The README explicitly states 'code is designed for a batch size of 1' and 'needs re-engineered to support higher batch sizes.' This is a critical limitation. By investigating model.py and data.py to identify batch-dependent operations (likely in the curve layer implementation), a contributor could refactor these to support variable batch sizes, significantly improving usability and training efficiency. The repo already notes a user (mahdip72) attempted this, suggesting there's a known solution path.

  • [ ] Audit model.py to identify batch-dependent curve layer operations and reshaping logic
  • [ ] Audit data.py to verify DataLoader batch handling
  • [ ] Refactor model.py curve layer forward passes to use dynamic batch dimensions instead of hardcoded assumptions
  • [ ] Add parametric batch size support to main.py training loops
  • [ ] Test with batch_size=2, 4, 8 and verify output shapes and loss values match batch_size=1 results
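The last checklist item — verifying that batched outputs match batch-size-1 outputs — reduces to a batch-invariance property test. A toy sketch with a stand-in per-image op (the real op would be the curve forward pass):

```python
def enhance(batch, gain=1.2):
    """Stand-in per-image enhancement: scale and clamp each pixel.

    Any correctly batched op must produce the same result whether images are
    processed together or one at a time.
    """
    return [[min(p * gain, 1.0) for p in img] for img in batch]

images = [[0.1, 0.5], [0.2, 0.9], [0.3, 0.4], [0.6, 0.7]]

# Batched output must match running each image alone at batch size 1
batched = enhance(images)
singles = [enhance([img])[0] for img in images]
assert batched == singles
print("batch-invariance check passed")
```

For the real refactor, the same comparison (batched forward pass vs. a loop of batch-1 forward passes, identical weights) is the acceptance test that proves the curve layers no longer mix statistics across the batch.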

Add comprehensive unit and integration tests (test_model.py, test_inference.py)

The repo has no test directory or test files despite having example inputs/outputs in adobe5k_dpe/curl_example_test_input and adobe5k_dpe/curl_example_test_inference. A contributor could create a test suite using these provided examples to validate model inference, metrics, and data loading. This would catch regressions and help new contributors verify their changes don't break functionality.

  • [ ] Create tests/test_model.py with unit tests for model.py (forward pass, curve layer outputs, weight shapes)
  • [ ] Create tests/test_inference.py using the 9 provided example images in adobe5k_dpe/ to validate end-to-end inference produces comparable PSNR/SSIM metrics
  • [ ] Create tests/test_metrics.py validating metric.py PSNR and SSIM calculations against reference values in adobe5k_dpe/curl_example_test_inference filenames
  • [ ] Create tests/test_data.py validating data.py correctly loads images from images_train.txt, images_valid.txt, images_test.txt
  • [ ] Add pytest configuration (pytest.ini or pyproject.toml) and document how to run tests in README

Consolidate model variants and add configuration abstraction (config.py)

The repo has multiple code variants (rgb_ted.py, raw_ted.py, CURL_for_RGB_images.zip, CURL_refactored.tar.gz) but no clear abstraction or configuration system. A contributor could create a config.py file that defines model hyperparameters, dataset paths, and variant selection, then refactor main.py to consume this config. This would reduce code duplication, improve clarity on supported variants, and make the codebase more maintainable.

  • [ ] Analyze rgb_ted.py and raw_ted.py to identify differences (input channels, curve layer variants, preprocessing)
  • [ ] Create config.py with ConfigParser or dataclass-based configuration for: model_type (RGB/RAW), batch_size, learning_rate, dataset_path, num_epochs, curve_layer_type
  • [ ] Refactor main.py to load configuration from config.py instead of hardcoded values
  • [ ] Refactor model.py to instantiate appropriate curve layers based on config.model_type
  • [ ] Update README with a 'Configuration' section documenting config.py options and example configs for RGB vs RAW pipelines
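A dataclass-based config.py sketch for this checklist — every field name here is a proposal, not existing repo code:

```python
from dataclasses import dataclass

@dataclass
class CurlConfig:
    """Hypothetical configuration for the proposed config.py."""
    model_type: str = "RGB"          # "RGB" or "RAW" (selects rgb_ted vs raw_ted path)
    batch_size: int = 1              # original code supports 1 only
    learning_rate: float = 1e-4
    dataset_path: str = "./adobe5k_dpe"
    num_epochs: int = 510

    def __post_init__(self):
        # Validate eagerly so a typo fails at startup, not mid-training
        if self.model_type not in ("RGB", "RAW"):
            raise ValueError(f"unknown model_type: {self.model_type}")

cfg = CurlConfig(model_type="RAW")
print(cfg.model_type, cfg.batch_size)  # RAW 1
```

Eager validation in `__post_init__` is the main win over scattered hardcoded values: misconfiguration surfaces immediately with a named field instead of a downstream shape error.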

Good first issues

  • Add unit tests for CurveLayer forward/backward in test_model.py (currently zero test coverage visible; critical for verifying batch size >1 patches work correctly)
  • Document Adobe5k DPE dataset setup in README with exact download links and directory structure; data.py references paths not explained
  • Create requirements_dev.txt with testing dependencies (pytest, torch-testing utils) and CI config (.github/workflows/test.yml) to validate community patches against original paper results

Top contributors

Recent commits

  • 4be9753 — Merge pull request #32 from sjmoran/dependabot/pip/numpy-1.22.0 (sjmoran)
  • e3f353d — Bump numpy from 1.19.2 to 1.22.0 (dependabot[bot])
  • a4e5953 — Added refactored CURL (shamefacedcrabs)
  • a41e0d1 — Update README.md (sjmoran)
  • c8c04c2 — Update README.md (sjmoran)
  • 7bdca4c — Update README.md (sjmoran)
  • 04d1c3c — Update README.md (sjmoran)
  • a76b6ce — Merge pull request #30 from deshwalmahesh/colab-demo (sjmoran)
  • e40fefe — Added google Colab demo for Inference only (deshwalmahesh)
  • ffc4829 — Update README.md (sjmoran)

Security observations

This research codebase has significant security concerns primarily centered on severely outdated and vulnerable dependencies. The numpy, PyTorch, and scikit-image packages are 2-3 years old with multiple known CVEs. The Pillow version constraint allows installation of vulnerable versions. While the codebase appears to be a machine learning research project without network exposure or data handling of sensitive information, deployment of this code in production would pose substantial risks. The lack of security documentation and input validation guidance further reduces the security posture. Immediate action required: update all dependencies to current versions and establish a security policy for vulnerability disclosure.

  • High · Outdated and Vulnerable Dependencies — requirements.txt. Multiple dependencies have known vulnerabilities and are significantly outdated. numpy==1.22.0 (released Jan 2022), scipy==1.5.2 (released Aug 2020), torch==1.7.1 (released Dec 2020), and torchvision==0.8.2 (released Nov 2020) are all several years old. These versions contain known CVEs including numpy buffer overflow issues, scipy vulnerabilities, and PyTorch security patches. Fix: Update all dependencies to their latest stable versions. Minimum recommendations: numpy>=1.26.0, scipy>=1.11.0, torch>=2.0.0, torchvision>=0.15.0. Run 'pip list --outdated' to identify all outdated packages and update accordingly.
  • High · Insecure Pillow Version Constraint — requirements.txt. Pillow>=8.1.2 is specified with a lower bound that allows installation of versions with known vulnerabilities. Pillow 8.1.2 through 8.3.1 contain several CVEs including image processing vulnerabilities that could lead to denial of service or code execution. Fix: Update to Pillow>=10.0.0 or the latest stable version. Consider using a more restrictive version constraint like 'Pillow>=10.0.0,<11.0.0' to avoid unexpected breaking changes.
  • Medium · Redundant and Misspelled Dependency — requirements.txt. The requirements.txt includes both 'scikit_image==0.18.1' and 'skimage==0.0'. The 'skimage==0.0' entry is problematic as it references a non-existent version (0.0) and appears to be a typo. This could cause installation failures or unexpected behavior. Fix: Remove the 'skimage==0.0' line. If scikit-image is needed, keep only 'scikit-image>=0.19.0' (note the hyphen, not underscore). Verify the package is imported correctly in code.
  • Medium · Missing Security Configuration Documentation — Repository root. No security hardening guidelines, security policy, or vulnerability disclosure process is documented. The repository lacks SECURITY.md or equivalent documentation for responsible disclosure of vulnerabilities. Fix: Create a SECURITY.md file documenting vulnerability reporting procedures. Add security scanning to CI/CD pipeline. Include guidelines for secure usage of the model and data handling.
  • Low · No Input Validation Documentation — main.py, data.py, model.py. The codebase processes image files and model inputs without documented validation procedures. While primarily a research project, lack of input sanitization documentation could lead to issues if deployed. Fix: Add input validation for image files (format, size, dimensions). Document expected input constraints. Add checks for malformed model files and untrusted model weights loading.
  • Low · Pretrained Model Weights Trust — pretrained_models/adobe_dpe/. Pretrained model files (.pt files) are included without integrity verification or signature validation. Users loading these models should verify authenticity, as compromised weights could silently alter model behavior. Fix: Generate and publish SHA256 checksums for all pretrained model files. Document how to verify model integrity. Consider signing model files with GPG keys. Add checksum verification in loading code.
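The checksum fix in the last observation is a few lines of stdlib Python. A sketch of publishing and verifying a SHA-256 digest, with in-memory bytes standing in for a .pt file:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest, as suggested for each pretrained .pt file."""
    return hashlib.sha256(data).hexdigest()

def verify_checkpoint(data: bytes, expected_hex: str) -> None:
    """Raise if the file's digest doesn't match the published checksum."""
    actual = sha256_hex(data)
    if actual != expected_hex:
        raise RuntimeError(f"checksum mismatch: expected {expected_hex}, got {actual}")

blob = b"fake model weights"        # stands in for the real .pt bytes
digest = sha256_hex(blob)           # this is what the repo would publish
verify_checkpoint(blob, digest)     # passes silently
print("checkpoint checksum verified")
```

In the real loading path you would read the .pt file in binary mode, hash it before calling torch.load, and compare against a digest published alongside the file in the README.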

LLM-derived; treat as a starting point, not a security audit.

Where to read next


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

AVOID · sjmoran/curl-image-enhancement — RepoPilot Verdict