RepoPilot

sjmoran/curl-image-enhancement

Code for the ICPR 2020 paper: "CURL: Neural Curve Layers for Image Enhancement"

AVOID

Stale and unlicensed — last commit 3y ago

Worst-case verdicts by use case:

  • Use as dependency — AVOID: no license — legally unclear; last commit was 3y ago…
  • Fork & modify — AVOID: no license — can't legally use code; no tests detected…
  • Learn from — GO: documented and popular — useful reference codebase to read through.
  • Deploy as-is — AVOID: no license — can't legally use code; last commit was 3y ago…

  • 5 active contributors
  • Stale — last commit 3y ago
  • Small team — 5 top contributors
  • Single-maintainer risk — top contributor 94% of commits
  • No license — legally unclear to depend on
  • No CI workflows detected
  • No test directory detected
  • Scorecard: marked unmaintained (0/10)
  • Scorecard: default branch unprotected (0/10)
What would change the verdict?
  • Use as dependency — AVOID → WAIT if: publish a permissive license (MIT, Apache-2.0, etc.); 1 commit in the last 365 days
  • Fork & modify — AVOID → WAIT if: add a LICENSE file
  • Deploy as-is — AVOID → WAIT if: add a LICENSE file

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard

Embed this verdict

[![RepoPilot: AVOID](https://repopilot.app/api/badge/sjmoran/curl-image-enhancement)](https://repopilot.app/r/sjmoran/curl-image-enhancement)

Paste into your README — the badge live-updates from the latest cached analysis.

Onboarding doc

Onboarding: sjmoran/curl-image-enhancement

Generated by RepoPilot · 2026-05-05

Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/sjmoran/curl-image-enhancement shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

Verdict

AVOID — Stale and unlicensed — last commit 3y ago

  • 5 active contributors
  • ⚠ Stale — last commit 3y ago
  • ⚠ Small team — 5 top contributors
  • ⚠ Single-maintainer risk — top contributor 94% of commits
  • ⚠ No license — legally unclear to depend on
  • ⚠ No CI workflows detected
  • ⚠ No test directory detected
  • ⚠ Scorecard: marked unmaintained (0/10)
  • ⚠ Scorecard: default branch unprotected (0/10)

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live sjmoran/curl-image-enhancement repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/sjmoran/curl-image-enhancement.

What it runs against: a local clone of sjmoran/curl-image-enhancement — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in sjmoran/curl-image-enhancement | Confirms the artifact applies here, not a fork |
| 2 | Default branch master exists | Catches branch renames |
| 3 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 4 | Last commit ≤ 1064 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>sjmoran/curl-image-enhancement</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sjmoran/curl-image-enhancement. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/sjmoran/curl-image-enhancement.git
#   cd curl-image-enhancement
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of sjmoran/curl-image-enhancement and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sjmoran/curl-image-enhancement(\.git)?\b" \
  && ok "origin remote is sjmoran/curl-image-enhancement" \
  || miss "origin remote is not sjmoran/curl-image-enhancement (artifact may be from a fork)"

# 2. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 3. Critical files exist
for f in main.py model.py data.py metric.py util.py; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 1064 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1034d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sjmoran/curl-image-enhancement"
  exit 1
fi
```

Each check prints `ok:` or `FAIL:`. The script exits non-zero if anything failed, so it composes cleanly into agent loops (`./verify.sh || regenerate-and-retry`).

</details>

TL;DR

CURL (Neural Curve Layers) is an image enhancement model, published at ICPR 2020, that learns global tone curves, applied per pixel through a differentiable neural curve layer. It maps raw input images to enhanced outputs by predicting learned RGB adjustment curves, achieving state-of-the-art results on the Adobe5K dataset without requiring semantic segmentation or local processing. Monolithic structure: core model in model.py, training loop in main.py, data loading in data.py and raw_ted.py/rgb_ted.py (RAW vs RGB input variants of the TED backbone), metrics in metric.py. Pretrained model weights in pretrained_models/adobe_dpe/. Example inference images in adobe5k_dpe/ with input/output/test directories. A Jupyter notebook (Colab Demo.ipynb) provides an interactive demo. Companion archives (CURL_large_batch.tar.gz, CURL_refactored.tar.gz) contain community-maintained variants.

Who it's for

Computer vision researchers and practitioners working on image enhancement, particularly those interested in low-level vision tasks like photo retouching. Photographers or image processing engineers who want a learnable alternative to hand-crafted tone curve adjustments.

Maturity & risk

The project is research-grade and relatively stable (ICPR 2020 publication), with pretrained weights for Adobe5K dataset included. However, the README explicitly warns the codebase only supports batch size 1 and notes community patches exist (CURL_large_batch.tar.gz, CURL_refactored.tar.gz from 2022) that the original authors have not validated. Active maintenance appears limited — no indication of recent commits or CI/CD.

The batch size 1 limitation is a serious production constraint requiring significant refactoring for real-world use. Dependencies are pinned to PyTorch 1.7.1 and torchvision 0.8.2 (2020-era), creating compatibility risks with modern CUDA/GPU setups. The codebase is single-author research code with no visible test suite, and the community-provided patches (barbodpj, mahdip72) are unvetted by the authors. There is no license, CI pipeline, or issue-resolution SLA.

Active areas of work

Project appears dormant in primary repo. Community engagement visible through Issue #27 (batch>1 support via barbodpj, archived as CURL_large_batch.tar.gz) and Issue #31 (refactored version via mahdip72, archived as CURL_refactored.tar.gz), both from 2022. No recent development visible in main branch; maintainer directs users to external patches.

Get running

git clone https://github.com/sjmoran/curl-image-enhancement.git && cd curl-image-enhancement && pip install -r requirements.txt (installs torch==1.7.1, scikit-image, Pillow, numpy, scipy). Pretrained weights ship in pretrained_models/adobe_dpe/; example inputs are in adobe5k_dpe/curl_example_test_input/.

Daily commands: Single-image inference: python main.py --test_image path/to/input.png (uses pretrained_models/adobe_dpe/). Training: python main.py --train_dir adobe5k_dpe/ with batch_size=1 hardcoded. Interactive: open Colab Demo.ipynb in Jupyter or Google Colab for step-by-step enhancement demo.

Map of the codebase

  • main.py — Entry point for training and inference; orchestrates the entire pipeline with argument parsing and training/testing loops.
  • model.py — Defines the CURL neural network architecture with curve layers; the core algorithmic innovation of the paper.
  • data.py — Handles dataset loading and preprocessing for Adobe5K; critical for understanding data flow into the model.
  • metric.py — Computes evaluation metrics (PSNR, SSIM); essential for understanding model performance assessment.
  • util.py — Utility functions for image I/O, normalization, and tensor operations; used throughout the codebase.
  • requirements.txt — Specifies PyTorch 1.7.1 and dependency versions; critical for reproducibility given batch-size limitations.

Components & responsibilities

  • data.py DataLoader (PyTorch DataLoader, Pillow, NumPy) — Loads Adobe5K images (input + expert ground truth), normalizes to [0, 1], returns batch_size=1 tensors.
    • Failure mode: FileNotFoundError if image paths invalid; crashes if images_train.txt missing.
  • model.py CURL (PyTorch nn.Module, curve parameterization) — Neural curve layers that learn global tone/color enhancement mappings via learnable curves.
    • Failure mode: Divergence if curve initialization poor; unbounded output if curve activation misconfigured.
  • main.py Training Loop (PyTorch optimizer (Adam), loss functions) — Orchestrates epochs, batches, validation, and checkpointing; manages GPU device placement.
    • Failure mode: CUDA OOM if model/batch too large; checkpoints lost if disk full.
  • metric.py (scikit-image, NumPy) — Computes PSNR and SSIM between prediction and ground truth.
    • Failure mode: Incorrect metric if image ranges mismatch (e.g., [0, 255] vs [0, 1]).
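The range-mismatch failure mode above is easy to reproduce. A minimal numpy sketch (a hand-rolled PSNR with illustrative values, not code from metric.py) shows how passing the wrong `data_range` silently inflates the score:

```python
import numpy as np

def psnr(pred, target, data_range):
    """Peak signal-to-noise ratio; data_range must match the image scale."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

a = np.full((4, 4), 0.5)  # prediction in [0, 1]
b = np.full((4, 4), 0.6)  # ground truth in [0, 1]

print(psnr(a, b, data_range=1.0))    # correct scale for [0, 1] images
print(psnr(a, b, data_range=255.0))  # wrong scale: inflated by ~48 dB
```

The same pitfall applies to SSIM: the `data_range` argument must match whether images are in [0, 1] or [0, 255].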

Data flow

  • adobe5k_dpe/ directories → data.py DataLoader — Input images and expert adjustments loaded; normalized to [0, 1] tensors.
  • data.py DataLoader → model.py CURL — Batch of 1 (input image) passed through curve layers.
  • model.py CURL → main.py loss computation — Enhanced output compared to expert ground truth; L1 loss computed.
  • main.py → metric.py — Validation predictions and ground truth sent for PSNR/SSIM computation.
  • main.py → pretrained_models/adobe_dpe/ — Best checkpoint saved with metric values in filename for reproducibility.
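The normalization and loss steps in the flow above can be sketched in numpy (the helper names and sample values are illustrative, not taken from the repo; the real pipeline uses torch tensors):

```python
import numpy as np

def to_unit_range(img_u8):
    """Mimic the loader: uint8 image -> float array in [0, 1]."""
    return img_u8.astype(np.float32) / 255.0

def l1_loss(pred, target):
    """Mean absolute error between enhanced output and expert target."""
    return float(np.mean(np.abs(pred - target)))

raw = np.array([[0, 128, 255]], dtype=np.uint8)   # hypothetical input row
x = to_unit_range(raw)
expert = np.array([[0.1, 0.5, 0.9]], dtype=np.float32)  # hypothetical target
print(l1_loss(x, expert))
```

In the actual training loop this loss is computed on GPU tensors and backpropagated through the curve layers.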

How to make changes

Add support for a new dataset

  1. Create a new dataset class in data.py following the Adobe5K pattern with train/valid/test splits. (data.py)
  2. Add dataset conditional logic in main.py argument parser and data loading block. (main.py)
  3. Create image list files (images_train.txt, images_valid.txt, images_test.txt) in a new dataset directory. (adobe5k_dpe/images_train.txt)

Train a new CURL model variant with different hyperparameters

  1. Modify model architecture parameters in model.py (e.g., number of curve layers, activation functions). (model.py)
  2. Update command-line arguments in main.py (e.g., learning rate, batch size). (main.py)
  3. Run main.py with new arguments; checkpoints will be saved with validation metrics in filename. (main.py)

Add a new evaluation metric

  1. Implement metric computation function in metric.py (e.g., LPIPS, FID). (metric.py)
  2. Call new metric in main.py validation loop after PSNR/SSIM computation. (main.py)
  3. Include metric in checkpoint filename or logging output in main.py. (main.py)
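As a concrete instance of step 1, a new metric can follow the same shape as the existing PSNR/SSIM helpers: a function taking prediction and ground-truth arrays. The sketch below adds mean absolute error; the function name and signature are hypothetical, not from metric.py:

```python
import numpy as np

def compute_mae(pred, target):
    """Mean absolute error in [0, 1] image space; lower is better.

    Inputs are expected as float arrays in [0, 1], matching the
    convention the existing metric helpers assume.
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    if pred.shape != target.shape:
        raise ValueError(f"shape mismatch: {pred.shape} vs {target.shape}")
    return float(np.mean(np.abs(pred - target)))
```

The validation loop in main.py would then call it alongside the PSNR/SSIM calls and include the value in logging or checkpoint filenames.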

Why these technologies

  • PyTorch 1.7.1 — Standard deep learning framework for neural network definition and GPU acceleration; critical for curve layer implementations.
  • NumPy, SciPy, scikit-image — Image processing and metric computation (PSNR, SSIM); essential for preprocessing and evaluation.
  • Pillow, imageio — Image I/O for loading/saving PNG/JPG; simplifies data pipeline abstraction.

Trade-offs already made

  • Batch size fixed to 1 during training

    • Why: Architectural constraint of curve layers as currently implemented; simplifies computational state management.
    • Consequence: Training is significantly slower than batch-size > 1 would allow; refactored version required for production use (noted in README as Issue 31).
  • Single GPU assumed; no distributed training

    • Why: Simplifies state management for batch_size=1 constraint; avoids synchronization complexity.
    • Consequence: Cannot scale to multi-GPU setups without significant code refactoring.
  • Global enhancement only (no spatial adaptation)

    • Why: CURL applies uniform curve transforms across entire image; matches paper's scope.
    • Consequence: Spatially-varying image degradations or local artifacts may not be addressed as effectively as patch-based methods.

Non-goals (don't propose these)

  • Does not support batch sizes > 1 in current form (architecture constraint noted in README).
  • Does not implement real-time inference optimization or quantization.
  • Does not handle video enhancement or temporal consistency.
  • Not designed for arbitrary dataset formats; requires Adobe5K-compatible structure (input/expert folders).

Anti-patterns to avoid

  • Hardcoded batch_size=1 assumption (High) — model.py, data.py, main.py: Entire codebase assumes batch_size=1; no abstraction for multi-batch training. Refactoring required for production use (referenced in README Issue 31).
  • Metric computation on CPU after GPU inference (Low) — metric.py, main.py: Predictions moved to CPU for scikit-image metric computation; avoids GPU memory overhead but adds PCIe latency.
  • No error handling for missing files (Medium) — data.py: No try/except around image loading; crashes immediately if input or expert images are missing. Affects reproducibility.

Performance hotspots

  • data.py image loading — Single-threaded disk I/O for batch_size=1; no prefetching or async loading

Traps & gotchas

  • Batch size must be 1 — code will silently fail or produce incorrect results with batch_size>1 without refactoring.
  • Image loading expects specific formats (PNG/JPG in adobe5k_dpe paths) and will error on unsupported formats.
  • PyTorch 1.7.1 may fail on newer CUDA versions; requires downgrading CUDA or PyTorch to ~2020-era versions.
  • Model expects 3-channel input (RGB or demosaiced Bayer); single-channel or 4-channel images need preprocessing.
  • No data augmentation is enabled by default in data.py — must be manually enabled for training stability.
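The channel-count trap can be handled with a small preprocessing step before images reach the model. A numpy sketch (a hypothetical helper, not present in util.py):

```python
import numpy as np

def ensure_rgb(img):
    """Coerce H×W or H×W×{1,3,4} arrays to H×W×3 for the model.

    Grayscale is replicated across channels; an alpha channel is dropped.
    """
    if img.ndim == 2:                          # H×W grayscale
        return np.stack([img] * 3, axis=-1)
    if img.ndim == 3 and img.shape[-1] == 1:   # H×W×1 single channel
        return np.repeat(img, 3, axis=-1)
    if img.ndim == 3 and img.shape[-1] == 4:   # H×W×4 RGBA
        return img[..., :3]
    if img.ndim == 3 and img.shape[-1] == 3:   # already RGB
        return img
    raise ValueError(f"unsupported image shape: {img.shape}")
```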

Architecture

Concepts to learn

  • Tone curve learning — CURL's core innovation is learning differentiable per-pixel tone curves (brightness/contrast/color adjustments) as neural network outputs; understanding tone curves is essential to the entire model
  • Demosaicing (Bayer pattern recovery) — raw_ted.py handles Bayer sensor data; demosaicing reconstructs full RGB from single-sensor channels. Critical for understanding raw_ted.py preprocessing
  • PSNR and SSIM metrics — metric.py computes these; PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) are the standard quantitative measures for image enhancement quality reported in the paper
  • Differentiable image processing — CurveLayer in model.py must be fully differentiable for end-to-end training via backprop; enables learning tone curves as continuous functions rather than lookup tables
  • Global vs. local image enhancement — CURL applies the same learned curves globally (per-pixel, same curve everywhere). Understanding why global curves suffice vs. local spatial adaptation justifies the architecture choice
  • Adobe 5K dataset (DPE split) — Paper evaluates solely on Adobe5K DPE (Digital Photo Enhancement); dataset format, train/val/test splits, and input/output pairing defined in data.py and adobe5k_dpe/ directories
  • Batch size=1 limitation and distributed training — README explicitly warns model only supports batch_size=1, blocking multi-GPU training. Understanding why (likely curve layer memory layout) is crucial before attempting modifications
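To build intuition for the tone-curve concept above, here is a non-differentiable numpy stand-in: the model predicts a set of curve knot values per channel, and each pixel is remapped through the resulting piecewise-linear curve. In the real model.py this remapping is built from differentiable torch ops so the knots can be learned by backprop; the knot values below are made up for illustration:

```python
import numpy as np

def apply_tone_curve(channel, knots):
    """Remap a [0, 1] channel through a piecewise-linear tone curve.

    `knots` gives the curve's output value at evenly spaced input
    positions; intermediate pixels are linearly interpolated.
    """
    xs = np.linspace(0.0, 1.0, num=len(knots))
    return np.interp(channel, xs, knots)

# A brightening curve: lifts midtones, keeps black and white fixed.
knots = np.array([0.0, 0.4, 0.65, 0.85, 1.0])
pixels = np.array([0.0, 0.25, 0.5, 1.0])
print(apply_tone_curve(pixels, knots))
```

Applying one such curve per RGB channel, identically at every pixel, is what makes the enhancement global rather than spatially adaptive.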

Related repos

  • sjmoran/deep-local-parametrization — Same author's prior work on local parametric image enhancement; predecessor to CURL's global tone curve approach
  • aiff22/DeepLPF — Concurrent approach to learnable local parametric filters for image enhancement; direct alternative to CURL using spatial convolutions instead of curve layers
  • chaofengc/Face-Restoration-Inference — Uses similar PyTorch-based enhancement pipelines for face restoration; shares pretrained model distribution and inference patterns
  • xinntao/Real-ESRGAN — Contemporary PyTorch image enhancement framework with similar architecture (encoder-decoder, pretrained weights, batch inference); shows modern approach to similar problem
  • jiahaoli95/NTIRE2021_IHASR — ICPR/NTIRE image enhancement competition entry; demonstrates reference implementations for Adobe5K dataset and PSNR/SSIM evaluation protocols

PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add batch size validation and multi-batch support tests

The README explicitly warns that batch size must be 1, yet there's no validation in the code and no tests to enforce this constraint. This is a critical issue since users could accidentally train with higher batch sizes and get incorrect results. Create comprehensive tests that validate batch size constraints and add runtime checks in data.py and model.py to catch misconfigurations early.

  • [ ] Add batch size assertion in data.py data loading pipeline
  • [ ] Add batch size validation in model.py forward() method with clear error message
  • [ ] Create test_batch_size.py that verifies rejection of batch_size > 1
  • [ ] Add test case in test suite that confirms batch_size=1 works correctly
  • [ ] Document the batch size limitation in code comments referencing the README warning
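The runtime checks in the checklist reduce to a small guard. A sketch of the kind of assertion that could sit at the top of the model's forward pass and the loader setup (names and wording are illustrative, not from the repo):

```python
def check_batch_size(batch_size):
    """Fail fast on the known batch_size=1 constraint.

    The README warns the curve layers only support batch size 1;
    enforcing it here turns silent wrong results into a clear error.
    """
    if batch_size != 1:
        raise ValueError(
            f"CURL only supports batch_size=1 (got {batch_size}); "
            "see the README and Issue #31 for refactored variants"
        )
    return batch_size
```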

Create unified inference script with device/model loading tests

The repo has multiple inference entry points (main.py, rgb_ted.py, raw_ted.py) with unclear differences and no tests validating they produce consistent outputs. Users are confused about which to use. Create a single inference.py with device detection, model loading validation, and unit tests for different input types (PNG, JPG, RAW).

  • [ ] Create inference.py with unified API supporting --model-path, --input-dir, --device flags
  • [ ] Add test_inference.py covering: model checkpoint loading validation, GPU/CPU device handling, input format compatibility (PNG/JPG)
  • [ ] Add tests using actual images from adobe5k_dpe/curl_example_test_input/ directory
  • [ ] Document in README which script to use and when (or deprecate redundant ones)
  • [ ] Verify output matches pretrained_models/adobe_dpe/ results on test images

Add metric calculation validation tests and edge case handling

metric.py calculates PSNR/SSIM which are reported in README and filenames, but there are no tests verifying correctness. The filenames in adobe5k_dpe/curl_example_test_inference/ suggest specific metric values (e.g., 'PSNR_27.756_SSIM_0.982') that could be validated. Add tests to ensure metric.py matches reference implementations.

  • [ ] Create test_metrics.py with known input/output pairs to validate PSNR calculation
  • [ ] Create test_metrics.py with known input/output pairs to validate SSIM calculation
  • [ ] Add edge case tests in metric.py: identical images (should give max SSIM), completely different images
  • [ ] Add test comparing metric.py output against a reference implementation (e.g., skimage.metrics)
  • [ ] Validate one example from adobe5k_dpe/curl_example_test_inference/ (e.g., a2803) produces matching PSNR/SSIM in tests
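The last checklist item can bootstrap its expected values from the inference filenames themselves. A sketch of parsing the 'PSNR_27.756_SSIM_0.982' pattern (the full filename below is a guess at the format, combining the example stem and metric suffix mentioned above):

```python
import re

def parse_metrics_from_filename(name):
    """Extract the PSNR/SSIM values encoded in an inference filename."""
    m = re.search(r"PSNR_(\d+\.\d+)_SSIM_(\d+\.\d+)", name)
    if m is None:
        raise ValueError(f"no metric pattern in filename: {name}")
    return float(m.group(1)), float(m.group(2))

print(parse_metrics_from_filename("a2803_PSNR_27.756_SSIM_0.982.png"))
```

A test can then recompute PSNR/SSIM on the corresponding image pair and assert the values match within tolerance.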

Good first issues

  • Add a test suite: model.py has no unit tests. Create tests/test_model.py validating CurveLayer forward/backward pass with synthetic 1×1×3 and 8×8×3 tensors, plus integration test loading a pretrained model and running inference on adobe5k_dpe/curl_example_test_input/*.png.
  • Write batch size >=2 support patch: refactor main.py training loop to handle dynamic batch sizes and metric.py to aggregate PSNR/SSIM across batches. Test with batch_size=2,4,8 and validate against CURL_large_batch.tar.gz approach.
  • Extend dataset support: add dataset.py class abstracting data.py (currently hardcoded to Adobe5K). Support common enhancement datasets (MIT-Adobe FiveK, RAISE, Kodak) with configurable path lists analogous to images_train.txt. Document in README with example paths.

Top contributors

Recent commits

  • 4be9753 — Merge pull request #32 from sjmoran/dependabot/pip/numpy-1.22.0 (sjmoran)
  • e3f353d — Bump numpy from 1.19.2 to 1.22.0 (dependabot[bot])
  • a4e5953 — Added refactored CURL (shamefacedcrabs)
  • a41e0d1 — Update README.md (sjmoran)
  • c8c04c2 — Update README.md (sjmoran)
  • 7bdca4c — Update README.md (sjmoran)
  • 04d1c3c — Update README.md (sjmoran)
  • a76b6ce — Merge pull request #30 from deshwalmahesh/colab-demo (sjmoran)
  • e40fefe — Added google Colab demo for Inference only (deshwalmahesh)
  • ffc4829 — Update README.md (sjmoran)

Security observations

The codebase has a POOR security posture primarily due to severely outdated dependencies with known vulnerabilities. This is a machine learning/image processing project with no apparent web-facing components, reducing the attack surface, but the use of vulnerable libraries (particularly Pillow, numpy, and torch from 2020-2021) poses significant risk if this code processes untrusted image data. The main concerns are: (1) unpatched vulnerabilities in image processing libraries, (2) permissive dependency constraints, (3) potential input validation gaps in image loading. Immediate action required: update all dependencies and implement input validation for image processing operations.

  • High · Outdated and Vulnerable Dependencies — requirements.txt. The project uses significantly outdated dependencies with known vulnerabilities. Specifically: numpy==1.22.0 (from Jan 2022), torch==1.7.1 (from Dec 2020), torchvision==0.8.2 (from Oct 2020), scipy==1.5.2 (from Jun 2020), and Pillow>=8.1.2. These versions are several major versions behind current releases and likely contain unpatched security vulnerabilities including potential code execution flaws in image processing libraries. Fix: Update all dependencies to their latest stable versions. Run 'pip list --outdated' and update packages systematically. Pay special attention to Pillow and numpy which have had multiple security updates. Consider using dependency vulnerability scanners like 'safety' or 'pip-audit'.
  • High · Pillow Version Constraint Too Permissive — requirements.txt - Pillow>=8.1.2. The Pillow dependency is specified as '>=8.1.2' without an upper bound, which may pull in versions that have compatibility issues with the outdated PyTorch and other dependencies. Additionally, Pillow 8.1.2 itself (from Mar 2021) contains known vulnerabilities (CVE-2021-23437, CVE-2021-25287, and others). Fix: Pin Pillow to a specific secure version (e.g., Pillow==10.x.x) and add upper bounds to critical dependencies. Use format: 'Pillow>=10.0.0,<11.0.0' to allow patch updates while preventing major version mismatches.
  • Medium · Unused Dependency with Incorrect Naming — requirements.txt - skimage==0.0. The requirements.txt includes 'skimage==0.0', which appears to be an incorrect or dummy package. The actual scikit-image package is already listed separately. This could indicate copy-paste errors in dependency management and may cause confusion or installation issues. Fix: Remove the 'skimage==0.0' entry from requirements.txt as scikit_image is already specified. Verify all dependencies are intentional and correctly named.
  • Medium · No Input Validation in Data Loading — data.py. The codebase includes data loading functionality (data.py) but without visible validation logic, there's risk of arbitrary file read attacks or malformed image processing. Image files from untrusted sources could potentially trigger vulnerabilities in PIL/imageio libraries. Fix: Implement strict input validation: verify file extensions, use secure file handling, validate image dimensions before processing, and implement file size limits. Consider sandboxing image processing operations.
  • Low · Hardcoded Model Paths — pretrained_models/ directory structure and main.py. Pretrained model paths appear to be hardcoded throughout the codebase. While not a direct security risk, this reduces flexibility and could lead to path traversal vulnerabilities if user input is concatenated with these paths. Fix: Use configuration files or environment variables for model paths. Implement path validation to prevent directory traversal attacks. Use os.path.join() and validate against a whitelist of allowed directories.
  • Low · No Requirements Versioning for Scipy — requirements.txt - scipy==1.5.2. scipy==1.5.2 is extremely outdated (Jun 2020) and may have unknown vulnerabilities. While scientific libraries are lower risk than network-facing code, numerical libraries can have memory safety issues. Fix: Update scipy to version 1.11.x or later. Test compatibility with torch and numpy updates.

LLM-derived; treat as a starting point, not a security audit.

Where to read next


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.