alexjc/neural-enhance
Super Resolution for images using deep learning.
Stale — last commit 5y ago
Weakest axis: copyleft license (AGPL-3.0) — review compatibility
No tests detected; no CI workflows detected
Documented and popular — useful reference codebase to read through.
- ✓ 10 active contributors
- ✓ AGPL-3.0 licensed
- ⚠ Stale — last commit 5y ago
- ⚠ Single-maintainer risk — top contributor 88% of recent commits
- ⚠ AGPL-3.0 is copyleft — check downstream compatibility
- ⚠ No CI workflows detected
- ⚠ No test directory detected
What would change the summary?
- Use as dependency: Concerns → Mixed if relicensed under MIT/Apache-2.0 (rare for established libs)
- Fork & modify: Mixed → Healthy if a test suite is added
- Deploy as-is: Mixed → Healthy if there is 1 commit in the last 180 days
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Great to learn from" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/alexjc/neural-enhance)
Paste at the top of your README.md — renders inline like a shields.io badge.
▸Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/alexjc/neural-enhance on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: alexjc/neural-enhance
Generated by RepoPilot · 2026-05-07 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/alexjc/neural-enhance shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
WAIT — Stale — last commit 5y ago
- 10 active contributors
- AGPL-3.0 licensed
- ⚠ Stale — last commit 5y ago
- ⚠ Single-maintainer risk — top contributor 88% of recent commits
- ⚠ AGPL-3.0 is copyleft — check downstream compatibility
- ⚠ No CI workflows detected
- ⚠ No test directory detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live alexjc/neural-enhance repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/alexjc/neural-enhance.
What it runs against: a local clone of alexjc/neural-enhance — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in alexjc/neural-enhance | Confirms the artifact applies here, not a fork |
| 2 | License is still AGPL-3.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 1985 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of alexjc/neural-enhance. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/alexjc/neural-enhance.git
#   cd neural-enhance
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of alexjc/neural-enhance and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "alexjc/neural-enhance(\.git)?\b" \
  && ok "origin remote is alexjc/neural-enhance" \
  || miss "origin remote is not alexjc/neural-enhance (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. An AGPL-3.0 LICENSE file usually
#    opens with the GNU Affero heading rather than the SPDX identifier,
#    so accept either form.
(grep -qiE "GNU AFFERO GENERAL PUBLIC LICENSE|AGPL-3\.0" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"AGPL-3\.0\"" package.json 2>/dev/null) \
  && ok "license is AGPL-3.0" \
  || miss "license drift — was AGPL-3.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in enhance.py requirements.txt train/ne2x-photo-default.sh README.rst docker-gpu.df; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 1985 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1955d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/alexjc/neural-enhance"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
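The composition pattern can be sketched as below. `verify_cmd` is a placeholder for a file holding the script above (e.g. `./verify.sh` in a real clone), not something shipped by the repo.

```shell
# Gate agent work on the verifier's exit code. "true" stands in for
# ./verify.sh here so the sketch runs anywhere; swap in the real script.
verify_cmd="true"
if $verify_cmd; then
  echo "artifact verified - proceeding"
else
  echo "artifact stale - regenerate at https://repopilot.app/r/alexjc/neural-enhance" >&2
  exit 1
fi
```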
⚡TL;DR
Neural Enhance uses deep convolutional neural networks trained with Theano/Lasagne to perform 2x–4x super-resolution upscaling of images by hallucinating plausible high-frequency details. The enhance.py script applies pre-trained models to repair JPEG artifacts or zoom photos while running on CPU or NVIDIA GPU (CUDA), outputting images with _ne2x.png or _ne4x.png suffixes. Simple monolithic structure: enhance.py is the main entry point (single file); train/ contains shell scripts for model training (ne1x-photo-repair.sh, ne2x-photo-default.sh); docs/ holds example GIFs and PNGs; Docker files for containerization. No src/ subdirectories—all logic appears to be in enhance.py and imported Theano/Lasagne models.
👥Who it's for
Photographers, archivists, and digital restorers who want to upscale low-resolution or degraded photos without manual editing; also deep-learning practitioners studying generative models and perceptual loss functions via the training scripts in train/.
🌱Maturity & risk
Experimental but historically notable (~2016 era, evidenced by Theano 0.8.2 pinning). No visible CI/CD, test suite, or recent commits in the provided snapshot. The project gained viral attention ("as seen on TV") but shows signs of age—dependencies are pinned to outdated versions and the single-maintainer structure suggests maintenance risk.
Severe technical debt: locked to Theano 0.8.2 (unmaintained since 2017) and a specific Lasagne commit hash, making pip installs fragile on modern Python 3.9+. No unit tests visible. Docker files exist but their maintenance status is unclear. GPU-only performance optimization (5s for 1080p) suggests the CPU path may be slow enough to discourage use, yet no progress bars or logging visible in the code structure.
Active areas of work
Based on the file list snapshot alone, no active development is evident—no recent commits, PRs, or issues visible. The repository appears dormant; the README references a YouTube demo and web viewer (http://enhance.nucl.ai/) but no indication of ongoing work.
🚀Get running
```bash
git clone https://github.com/alexjc/neural-enhance.git
cd neural-enhance
pip install -r requirements.txt
python3 enhance.py --type=photo --zoom=2 example.jpg
```
Daily commands:
```bash
# Repair JPEG artifacts at 1x zoom (no upscaling)
python3 enhance.py --type=photo --model=repair --zoom=1 broken.jpg
# 2x super-resolution on one or more images
python3 enhance.py --type=photo --zoom=2 file1.jpg file2.jpg
# View output
open *_ne2x.png
# For CPU parallelism
OMP_NUM_THREADS=4 python3 enhance.py --type=photo --zoom=2 input.jpg
```
🗺️Map of the codebase
- enhance.py — Main entry point for the neural enhancement pipeline — handles image loading, model inference, and output generation; all contributors must understand the workflow here.
- requirements.txt — Pins critical deep learning dependencies (Theano 0.8.2, Lasagne) that define the entire compute environment; version mismatches cause immediate failures.
- train/ne2x-photo-default.sh — Default training script for 2x super-resolution; establishes the standard model configuration and hyperparameters all users and developers reference.
- README.rst — Comprehensive documentation of project goals, usage examples, and installation — every new contributor should read this to understand the creative AI hallucination paradigm.
- docker-gpu.df — GPU Docker configuration encapsulating the full dependency graph (CUDA, Theano, Lasagne); critical for reproducible training and inference environments.
🧩Components & responsibilities
- enhance.py (CLI & Orchestration) (Python, argparse, Pillow, Theano) — Parse command-line arguments, load input image, instantiate Theano model, invoke inference, save output, handle errors.
- Failure mode: Invalid image path, OOM, corrupt model weights → exception printed to stderr; process exits non-zero.
- Theano Model (Compute Graph) (Theano, CUDA (GPU) or BLAS (CPU)) — Symbolic convolutional neural network for 2x or 4x super-resolution; trained weights define the hallucination prior.
- Failure mode: GPU out of memory, compilation errors, invalid input tensor shape → Theano exception raised to enhance.py.
- Training Scripts (Shell) (Bash, Theano, Lasagne, NumPy) — Configure dataset paths, learning rates, epochs, model architecture; launch training via Theano/Lasagne Python API.
- Failure mode: Missing dataset, invalid hyperparameters, GPU timeout → training aborts; no checkpoint recovery.
- Docker Container (Docker, NVIDIA CUDA Toolkit, cuDNN) — Encapsulate CUDA runtime, Theano, Lasagne, Python; ensure reproducible inference across machines.
- Failure mode: CUDA driver mismatch, out of GPU memory, image build failure → container fails to start or runs inference slowly.
🔀Data flow
- User input image (disk) → Pillow PIL.Image — File I/O: JPEG/PNG bytes → decoded RGB tensor in memory.
- PIL.Image tensor → Theano compute graph — Convert NumPy array to Theano tensor; feed through Conv2D/Upsampling layers.
- Theano model weights (disk) → Theano compute graph — Load .npy or checkpoint files; initialize Theano shared variables.
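The tensor-layout changes in this flow can be sketched with NumPy standing in for the Theano graph; the shapes and the nearest-neighbour stand-in are illustrative, and neither function exists in the repo (the real model lives in enhance.py).

```python
import numpy as np

# Sketch of the data flow above, with NumPy standing in for the Theano
# graph: HWC uint8 image -> NCHW float32 batch -> 2x-upscaled output.
def load_as_tensor(rgb_image: np.ndarray) -> np.ndarray:
    """HWC uint8 image -> NCHW float32 batch of 1, values in [0, 1]."""
    chw = rgb_image.astype(np.float32).transpose(2, 0, 1) / 255.0
    return chw[np.newaxis, ...]

def fake_upscale_2x(batch: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upscaling as a stand-in for learned layers."""
    return batch.repeat(2, axis=2).repeat(2, axis=3)

image = np.zeros((8, 8, 3), dtype=np.uint8)  # placeholder "photo"
out = fake_upscale_2x(load_as_tensor(image))
print(out.shape)  # (1, 3, 16, 16)
```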
🛠️How to make changes
Add Support for a New Super-Resolution Scale (e.g., 3x upsampling)
- Create a new training script by copying train/ne2x-photo-default.sh, modify the scale parameter from 2 to 3, and adjust the network architecture if needed (train/ne3x-photo-default.sh).
- Update enhance.py to detect and load the new 3x model checkpoint based on a command-line argument (enhance.py).
- Document the new scale in README.rst with example results and the training command (README.rst).
Deploy Enhanced Model to Production Container
- Train the model using train/ne2x-photo-default.sh; store the weights as .npy or a model checkpoint (train/ne2x-photo-default.sh).
- Update enhance.py to reference the new model path and load it as the default (enhance.py).
- Rebuild the GPU Docker image with docker build -f docker-gpu.df, ensuring the new model weights are included in the image layers (docker-gpu.df).
Add Custom Training Dataset and Variant
- Create a new training script (e.g., train/ne2x-portrait-custom.sh) based on ne2x-photo-default.sh, pointing to the custom dataset path (train/ne2x-portrait-custom.sh).
- Update requirements.txt if your dataset loader needs additional libraries (e.g., imageio, scipy) (requirements.txt).
- Document the dataset format and training procedure in README.rst under a new "Custom Training" section (README.rst).
🔧Why these technologies
- Theano 0.8.2 — Low-level symbolic tensor computation with GPU support (CUDA); enables efficient backpropagation for training deep convolutional networks on GPUs.
- Lasagne (custom fork @61b1ad1) — High-level neural network library built on Theano; provides modular layer abstractions (Conv2D, Upsampling, etc.) for constructing super-resolution architectures.
- Pillow (PIL) — Robust image I/O library; handles JPEG, PNG, TIFF decoding and encoding, essential for real-world photo input/output.
- Docker (GPU & CPU variants) — Containerization ensures reproducibility across environments; GPU variant enables training on NVIDIA hardware with CUDA.
⚖️Trade-offs already made
- Used Theano 0.8.2 (legacy) instead of modern TensorFlow/PyTorch
  - Why: The project was developed when Theano was active (circa 2016); the Lasagne ecosystem was mature and optimized for that era.
  - Consequence: The codebase is now difficult to maintain or extend (Theano is discontinued); GPU compute graphs are fixed at compile time, limiting dynamic inference.
- Hallucination-based super-resolution (neural prior) instead of explicit reconstruction
  - Why: Neural networks trained on image datasets learn plausible detail patterns; simpler to train than physics-based inverse models.
  - Consequence: Output is artistically enhanced but not photometrically accurate; users must accept a "creative AI" reinterpretation of pixels.
- Shell scripts for training configuration instead of a structured config format (YAML/JSON)
  - Why: Simple, minimal dependencies; shell is portable and familiar to practitioners.
  - Consequence: Hard to version-control hyperparameter sweeps; no programmatic config validation; difficult to automate or parallelize training.
🚫Non-goals (don't propose these)
- Does not support video super-resolution (image-only focus).
- Does not include model retraining UI or web interface (CLI-only).
- Not designed for real-time or streaming inference (batch/offline model).
- Does not support Windows natively (Theano GPU support is Linux/macOS).
- No authentication, cloud sync, or collaborative training features.
🪤Traps & gotchas
- Theano/Lasagne are EOL (2017+) — modern CUDA, cuDNN, and Python 3.9+ may break the stack; test on Python 3.4–3.6 if possible.
- Pre-trained models are not checked into git (GitHub releases only) — downloading them is a manual step not shown in the README.
- GPU memory assumptions: large images may OOM on cards with < 2 GB VRAM.
- The --device=gpu0 argument is hardcoded; multi-GPU is not explicitly supported.
- The OMP_NUM_THREADS environment variable is only effective on CPU; there is no documentation on how many parallel processes a single machine can sustain.
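The CPU process-parallelism tip can be sketched as a dry run. The `echo` prints each command instead of executing it, so the sketch runs without a clone; the file names are placeholders, and dropping the `echo` would run the real thing.

```shell
# Several single-threaded enhance.py processes instead of one
# multi-threaded one. OMP_NUM_THREADS=1 avoids oversubscription when
# -P 4 runs four processes at once.
export OMP_NUM_THREADS=1
printf '%s\n' a.jpg b.jpg c.jpg d.jpg \
  | xargs -n 1 -P 4 echo python3 enhance.py --type=photo --zoom=2
```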
🏗️Architecture
💡Concepts to learn
- Perceptual Loss Function — Neural Enhance uses perceptual loss (comparing features from a pre-trained CNN) rather than pixel-wise MSE to train hallucination; understanding this is key to why the output looks "creative" rather than blurry.
- Upsampling / Transposed Convolution — The core operation for the 2x–4x resolution increase; Lasagne's Upsampling2DLayer or deconvolution layers are used in the model, and their stride/kernel choices determine output quality.
- GPU Memory Management in Theano — The --device=gpu0 path requires understanding Theano's memory allocation; large images cause OOM without careful batch sizing or graph optimization.
- Hallucination in Generative Models — The README explicitly notes the network "hallucinates" plausible details rather than reconstructing ground truth; this is a fundamental trade-off in super-resolution and distinguishes creative upscaling from lossless upsampling.
- Theano Symbolic Computation Graph — Theano differs from eager frameworks (TF eager, PyTorch); you must understand how theano.function() compiles graphs and how --device directs compilation to CPU or GPU backends.
- CPU Parallelism via OMP_NUM_THREADS — The README hints at running 4–8 processes in parallel on CPU by setting OMP_NUM_THREADS=4; this is a process-level parallelism strategy, not true multi-GPU, and is CPU-specific.
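The perceptual-vs-MSE distinction can be illustrated with a toy example. Image gradients stand in for the CNN feature maps the real loss compares, so the numbers are illustrative, not the project's actual loss values; the point is that missing high-frequency detail is penalized far more heavily in feature space than in pixel space.

```python
import numpy as np

# Pixel-wise MSE vs. a feature-space ("perceptual") loss, with horizontal
# gradients as a crude stand-in for CNN activations.
def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a - b) ** 2))

def feature_loss(a: np.ndarray, b: np.ndarray) -> float:
    fa, fb = np.diff(a, axis=1), np.diff(b, axis=1)  # crude "features"
    return float(np.mean((fa - fb) ** 2))

sharp = np.tile([0.0, 1.0], (4, 4))   # high-frequency checker rows
blurry = np.full((4, 8), 0.5)         # same mean brightness, no detail
print(mse(sharp, blurry))             # 0.25 — looks like a small error
print(feature_loss(sharp, blurry))    # 1.0  — lost detail dominates
```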
🔗Related repos
- idealo/image-super-resolution — Modern super-resolution framework using TensorFlow/Keras; a directly comparable alternative solving the same 2x–4x upscaling problem.
- leftthomas/SRGAN — Generative adversarial network approach to super-resolution; represents a successor paradigm to the perceptual-loss approach used in neural-enhance.
- openai/whisper — Related ecosystem: a state-of-the-art pre-trained model released under MIT, demonstrating modern best practices for distributing large neural networks (contrast with neural-enhance's manual download approach).
- Theano/Theano — The deep learning backend used throughout; understanding Theano's compute graph and symbolic differentiation is required to modify the model architecture.
- Lasagne/Lasagne — High-level neural network library built on Theano; the API for defining, training, and loading models in the train/ scripts and enhance.py.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add Docker entrypoint script and improve docker-cpu.df/docker-gpu.df documentation
The repo has two Dockerfile variants (docker-cpu.df and docker-gpu.df) but no documented entrypoint or clear instructions on how to build and run them. New contributors can't easily test the Docker setup. Adding an entrypoint.sh script that properly invokes enhance.py with argument forwarding, plus updating the Dockerfiles to use it, would make the project more accessible and catch Docker-specific issues early.
- [ ] Create entrypoint.sh that sources environment setup and passes arguments to enhance.py
- [ ] Update docker-cpu.df and docker-gpu.df to COPY entrypoint.sh and set ENTRYPOINT
- [ ] Add Docker build/run examples to README.rst with specific commands for CPU and GPU variants
- [ ] Test that 'docker run neural-enhance <image.jpg>' works end-to-end
Add integration tests for enhance.py with sample images
The repo lacks any test suite despite having a complex deep learning pipeline. enhance.py is the core executable but has no validation that it works with different image formats, sizes, or scaling factors. Adding a tests/ directory with integration tests using the example images in docs/ would catch regressions and help new contributors verify their changes work correctly.
- [ ] Create tests/ directory with __init__.py and test_enhance.py
- [ ] Write test_enhance_2x_scaling() that runs enhance.py on a small subset of docs/Faces_example.png
- [ ] Write test_enhance_preserves_format() that tests JPEG, PNG output formats
- [ ] Add pytest to requirements.txt and document running tests in README.rst
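One of the checks above can be sketched as a pytest-style function. `output_name` is a hypothetical helper mirroring the `_ne2x.png` suffix scheme described in the TL;DR, not code from the repo; a real integration test would instead run enhance.py via subprocess and inspect the file it writes.

```python
from pathlib import Path

# Hypothetical helper: foo.jpg -> foo_ne2x.png, per the documented
# output naming convention.
def output_name(src: str, zoom: int) -> str:
    p = Path(src)
    return str(p.with_name(f"{p.stem}_ne{zoom}x.png"))

def test_output_name():
    assert output_name("photo.jpg", 2) == "photo_ne2x.png"
    assert output_name("docs/Faces_example.png", 4) == "docs/Faces_example_ne4x.png"

test_output_name()
print("ok")
```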
Create train/README.md documenting training pipeline and scripts
The train/ directory contains two shell scripts (ne1x-photo-repair.sh and ne2x-photo-default.sh) but zero documentation explaining what they do, what datasets they expect, or how contributors can customize training. This is a barrier to contributors wanting to extend or retrain models. A detailed README in the train/ directory with parameter explanations and example workflows would significantly improve maintainability.
- [ ] Create train/README.md explaining the two training scripts and their differences
- [ ] Document required dataset format, directory structure, and where to source training data
- [ ] Add sections for each script showing: command syntax, key parameters (learning rate, epochs, batch size), expected output artifacts, and training time estimates
- [ ] Include troubleshooting section for common Theano/Lasagne GPU/CPU issues
🌿Good first issues
- Add unit tests for image I/O in enhance.py: test that Pillow correctly loads/saves .jpg, .png, and .gif formats and that zoom factors 1, 2, and 4 produce the expected output dimensions.
- Create a model registry / config file: replace hardcoded model paths in enhance.py with a YAML or JSON mapping of --model names to file paths, allowing users to easily swap pre-trained models without editing Python code.
- Document the training pipeline: add comments to train/ne2x-photo-default.sh explaining dataset preparation, Lasagne layer definitions, loss function choices, and how to evaluate model quality on a validation set.
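The model-registry idea can be sketched as below. The JSON mapping, the file names, and `resolve_model` are all illustrative (none of them exist in the repo); the idea is that enhance.py would look names up here instead of hardcoding paths.

```python
import json
from typing import Optional

# Hypothetical registry mapping --model names to weight files.
REGISTRY_JSON = """
{
  "photo-default": "models/ne2x-photo-default.pkl.bz2",
  "photo-repair":  "models/ne1x-photo-repair.pkl.bz2"
}
"""

def resolve_model(name: str, registry: Optional[dict] = None) -> str:
    """Look up a --model name; exit with a helpful message if unknown."""
    registry = registry or json.loads(REGISTRY_JSON)
    try:
        return registry[name]
    except KeyError:
        raise SystemExit(f"unknown --model {name!r}; choose from {sorted(registry)}")

print(resolve_model("photo-repair"))  # models/ne1x-photo-repair.pkl.bz2
```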
⭐Top contributors
- @alexjc — 88 commits
- @dribnet — 4 commits
- @concutere — 1 commit
- @mtrgit — 1 commit
- @AlexeyKruglov — 1 commit
📝Recent commits
- 2fd67de — Merge pull request #108 from concutere/fix-load-model-absolute-path (alexjc)
- 5248437 — fix load_model to use absolute path (concutere)
- d1ab2b1 — Merge pull request #94 from m-tr/fix/dockerfile-wget (alexjc)
- 14e1734 — Dockerfiles wget fix (mtrgit)
- b3a9ba5 — Merge pull request #93 from AlexeyKruglov/fix-hist (alexjc)
- 8e60e74 — Fix histogram matching with scipy 0.17.0 (AlexeyKruglov)
- 4ee13f5 — Add ne4x pretrained weights to docker image (#85) (fchouteau)
- 559e66b — Fix exception when training with JPEG compression. (alexjc)
- 9d2aa3c — Fix histogram colors overflowing in certain images. Closes #73. (alexjc)
- 904071d — Fix memory error in Docker by changing default tile size. Closes #74. (alexjc)
🔒Security observations
The codebase has critical security exposure stemming from a completely outdated and unmaintained dependency stack (Theano 0.8.2 from 2016, Lasagne, Python 2-era libraries). Both Theano and Lasagne are officially deprecated with no security support. The project lacks upper version bounds and hash verification for its dependencies. While the project itself (super-resolution image processing) doesn't appear to have web-facing components based on the visible file structure, the unmaintained dependencies pose significant risks, including a lack of security patches and potential supply-chain compromise.
- Critical · Outdated Theano Dependency with Known Vulnerabilities — requirements.txt. Theano==0.8.2 is pinned to a version from 2016 that has not received security updates in over 7 years. Theano was deprecated and abandoned by its maintainers. This version likely contains multiple known vulnerabilities in its dependencies and core functionality. Fix: Migrate to a maintained deep learning framework such as TensorFlow, PyTorch, or JAX. If legacy code requires Theano, conduct a thorough security audit of the codebase and implement additional input validation and sandboxing.
- Critical · Unversioned Lasagne Dependency from Git Commit — requirements.txt. The Lasagne dependency is pinned to a specific git commit (61b1ad1) from an external repository without version constraints. This approach is fragile and may pull code from an unmaintained repository. Lasagne was also deprecated and is no longer actively maintained. Fix: Replace with a maintained alternative. If Lasagne is critical, pin to a stable, audited release version rather than a git commit hash. Consider migrating to modern deep learning frameworks with active security support.
- High · Pillow Version Constraint Too Permissive — requirements.txt. Pillow>=3.2.0 allows installation of any version from 3.2.0 onwards without an upper bound. Given the project's age (dependencies from 2016), this could allow installation of incompatible or vulnerable versions. Pillow has had multiple CVEs affecting image processing. Fix: Pin Pillow to a specific compatible version range (e.g., Pillow>=3.2.0,<10.0.0) after testing, and regularly audit and update to patched versions.
- High · No Dependency Hash Verification — requirements.txt. The requirements.txt file lacks hash verification (--hash) for any dependencies. This leaves the project vulnerable to package repository compromises or man-in-the-middle attacks during dependency installation. Fix: Use pip-compile --generate-hashes (pip-tools) or a similar tool to produce --hash values for all dependencies, e.g. 'Pillow==X.Y.Z --hash=sha256:...'.
- Medium · Unspecified Colorama Version — requirements.txt. The colorama dependency has no version constraint specified, which could lead to installation of incompatible or unexpected versions. Fix: Specify an explicit version or version range for colorama (e.g., colorama>=0.4.0,<1.0.0) to ensure reproducible builds.
- Medium · Deprecated Deep Learning Stack — enhance.py, requirements.txt, entire codebase. The entire project is built on a deprecated and unmaintained deep learning stack (Theano + Lasagne from 2016). No security patches or updates are available from upstream maintainers. Fix: Consider a comprehensive refactoring to use maintained frameworks (TensorFlow 2.x, PyTorch, or similar). If that's not feasible, perform a security-focused code audit and implement strict input validation.
- Low · Missing Security Configuration in Docker — docker-cpu.df, docker-gpu.df. The Docker files are present but their contents are not visible in the analysis. They may lack security best practices such as non-root user execution, minimal base images, or security scanning. Fix: Review the Dockerfiles for security best practices: use minimal base images (alpine/distroless), run as a non-root user, minimize layers, scan with tools like Trivy or Grype, and avoid passing sensitive data in build arguments.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.