NVlabs/stylegan2
StyleGAN2 - Official TensorFlow Implementation
What would change the summary?
- Use as dependency: Concerns → Mixed if the license terms are clarified
- Deploy as-is: Mixed → Healthy if there is 1 commit in the last 180 days
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Onboarding doc
Onboarding: NVlabs/stylegan2
Generated by RepoPilot · 2026-05-07 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/NVlabs/stylegan2 shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
WAIT — Stale — last commit 2y ago
- 3 active contributors
- Other licensed
- Tests present
- ⚠ Stale — last commit 2y ago
- ⚠ Small team — 3 contributors active in recent commits
- ⚠ Single-maintainer risk — top contributor 89% of recent commits
- ⚠ Non-standard license (Other) — review terms
- ⚠ No CI workflows detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live NVlabs/stylegan2
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/NVlabs/stylegan2.
What it runs against: a local clone of NVlabs/stylegan2 — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in NVlabs/stylegan2 | Confirms the artifact applies here, not a fork |
| 2 | License is still Other | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | Last commit ≤ 749 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of NVlabs/stylegan2. If you don't
# have one yet, run these first:
#
# git clone https://github.com/NVlabs/stylegan2.git
# cd stylegan2
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of NVlabs/stylegan2 and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "NVlabs/stylegan2(\.git)?\b" \
  && ok "origin remote is NVlabs/stylegan2" \
  || miss "origin remote is not NVlabs/stylegan2 (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(Other)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"Other\"" package.json 2>/dev/null) \
  && ok "license is Other" \
  || miss "license drift — was Other at generation time"
# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"
# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 749 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~719d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/NVlabs/stylegan2"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
StyleGAN2 is NVIDIA's official TensorFlow implementation of a style-based generative adversarial network that produces high-quality synthetic images through a redesigned generator architecture. It addresses StyleGAN's characteristic artifacts by replacing adaptive instance normalization (AdaIN) with weight demodulation, replacing progressive growing with skip/residual network designs, and adding path length regularization, enabling both superior image generation and more reliable detection of generated content. Monolithic single-repo structure: dnnlib/ contains the TensorFlow abstraction layer with custom ops in dnnlib/tflib/ops/; training/ holds the core GAN logic split across networks_stylegan2.py (architecture), training_loop.py (training harness), loss.py (objectives), and dataset.py (data pipeline); metrics/ implements evaluation suites (FID, IS, PPL); top-level scripts (run_training.py, run_generator.py, run_projector.py) are the entry points.
👥Who it's for
ML researchers and engineers building generative image models who need a reference implementation of state-of-the-art GAN architecture; practitioners using pre-trained networks for image synthesis, inversion, or perceptual loss metrics; computer vision teams evaluating generative model quality via Inception Score and Fréchet Inception Distance metrics.
🌱Maturity & risk
Production-ready and well-established: this is the official paper implementation released by NVIDIA Research in 2019 (arXiv:1912.04958). The codebase is stable with comprehensive pre-trained network weights available, though active development has shifted to StyleGAN2-ADA-PyTorch (noted in README). TensorFlow implementation is mature but the ecosystem has evolved toward PyTorch variants.
Standard open source risks apply.
Active areas of work
This repository is in maintenance mode. The README explicitly directs users to StyleGAN2-ADA-PyTorch as the active development fork. No ongoing feature work is visible; the TensorFlow version serves as the authoritative reference implementation for the published paper but is not receiving regular updates.
🚀Get running
git clone https://github.com/NVlabs/stylegan2.git
cd stylegan2
pip install tensorflow-gpu==1.14 scipy requests
python dataset_tool.py create_from_images datasets/my-dataset ./my-images
python run_training.py --data-dir=datasets --config=config-f --dataset=my-dataset --mirror-augment=true
Daily commands:
- Single-GPU training: python run_training.py --data-dir=datasets --config=config-f --dataset=cifar10
- Multi-GPU (8× recommended): python run_training.py --data-dir=datasets --config=config-f --dataset=cifar10 --num-gpus=8
- Image generation from a checkpoint: python run_generator.py generate-images --network=pretrained.pkl --seeds=0-10
- Compute metrics on generated images: python run_metrics.py --data-dir=datasets --network=<network.pkl> --dataset=cifar10 --metrics=fid50k
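The fid50k metric is the Fréchet Inception Distance over 50k samples. As a reference for what the metric computes, here is a standalone NumPy sketch of the Fréchet distance between two Gaussians — the repo's own implementation (metrics/frechet_inception_distance.py) fits these Gaussians to InceptionV3 features; this version is illustrative only:

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    # FID between two Gaussians: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1 C2)^{1/2}).
    # Tr((C1 C2)^{1/2}) is computed as Tr((C1^{1/2} C2 C1^{1/2})^{1/2}) so we
    # only ever take square roots of symmetric PSD matrices (via eigh).
    def sqrtm_psd(m):
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, 0.0, None)
        return (vecs * np.sqrt(vals)) @ vecs.T

    s1 = sqrtm_psd(cov1)
    covmean_tr = np.trace(sqrtm_psd(s1 @ cov2 @ s1))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1) + np.trace(cov2) - 2.0 * covmean_tr)
```

Identical distributions give 0; with identity covariances the distance reduces to the squared mean difference, which is a handy sanity check.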
🗺️Map of the codebase
- training/networks_stylegan2.py: Defines the StyleGAN2 Generator and Discriminator architectures — mapping network, modulated convolutions (weight demodulation, which replaces StyleGAN's AdaIN), and skip/residual configurations — the core innovation of the paper
- training/training_loop.py: Implements the main GAN training harness including progressive growing schedule, regularization (R1, path length), gradient accumulation, and checkpoint saving
- dnnlib/tflib/custom_ops.py: Wrapper for CUDA custom ops (fused_bias_act, upfirdn_2d) that provide critical performance optimizations for image upsampling and activation fusing
- training/loss.py: Defines the adversarial loss objectives (non-saturating logistic loss by default), R1/R2 gradient penalties, and the path length regularization that encourages a smooth, invertible generator mapping
- metrics/frechet_inception_distance.py: Implements FID metric (primary quality evaluation method) using pre-trained InceptionV3 features and Fréchet distance on real vs generated distributions
- pretrained_networks.py: Downloads and caches pre-trained network weights from NVIDIA CDN, enabling zero-shot image generation and transfer learning without training from scratch
- training/dataset.py: Data loading pipeline supporting .tfrecords format with augmentation, progressive growing resolution scheduling, and memory-efficient batching
- dnnlib/tflib/ops/upfirdn_2d.cu: CUDA kernel implementing polyphase upsampling/downsampling with arbitrary filters (critical for StyleGAN2's high-quality resampling)
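The polyphase resampling that upfirdn_2d.cu fuses into one CUDA kernel can be described in a few lines of NumPy. This is an illustrative reference only — naive, CPU-only, and without the padding and gradient handling of the real op:

```python
import numpy as np

def upfirdn_1d(x, f, up=2, down=1):
    # Zero-stuff by `up`, convolve with FIR filter `f`, keep every `down`-th
    # sample: the classic upsample-filter-downsample chain.
    ups = np.zeros(len(x) * up)
    ups[::up] = x
    return np.convolve(ups, f)[::down]

def upfirdn_2d(img, f, up=2, down=1):
    # Separable 2D version: apply the 1D op along rows, then along columns.
    tmp = np.array([upfirdn_1d(row, f, up, down) for row in img])
    return np.array([upfirdn_1d(col, f, up, down) for col in tmp.T]).T
```

An impulse through up=2 with filter [1, 1] simply duplicates each sample, which makes the behaviour easy to eyeball.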
🛠️How to make changes
- Architecture changes: edit training/networks_stylegan2.py (Generator/Discriminator classes inheriting from dnnlib.tflib.network.Network).
- Loss tweaks: modify training/loss.py (G_loss, D_loss functions).
- Data pipeline: extend training/dataset.py (Dataset class).
- Custom metrics: add a new file in metrics/ following the metric_base.py pattern and register it in metrics/metric_defaults.py.
- Training hyperparameters: pass CLI flags to run_training.py or edit dnnlib/submission/run_context.py defaults.
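A loss tweak like the path length regularizer can be prototyped outside TensorFlow first. Below is a minimal NumPy sketch of the penalty E_y[(||J_w^T y|| - a)^2], using a toy linear generator whose Jacobian is known exactly; the dimensions and the fixed target a are illustrative (the real regularizer in loss.py uses an exponential-moving-average target and the synthesis network's Jacobian-vector products):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))  # toy linear "generator" G(w) = A @ w, so J_w = A

def path_length_penalty(J, a=1.0, n_samples=10_000):
    # Monte-Carlo estimate of E_y[(||J^T y|| - a)^2] with y ~ N(0, I) drawn
    # in image space, mirroring the structure of the path length regularizer.
    y = rng.standard_normal((n_samples, J.shape[0]))
    lengths = np.linalg.norm(y @ J, axis=1)  # ||J^T y|| per sample
    return float(np.mean((lengths - a) ** 2))
```

For a zero Jacobian the penalty is exactly a² = 1, which gives a quick correctness check for the estimator.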
🪤Traps & gotchas
- TensorFlow 1.x runs with eager execution disabled; many tflib functions assume graph mode and will error if you call tf.enable_eager_execution().
- CUDA compute capability must be ≥ 6.1 for the custom ops; older GPUs (e.g., Maxwell) fail at import time.
- pretrained_networks.py uses hardcoded URLs to NVIDIA's CDN that can time out; model downloads are multi-gigabyte and need a stable connection.
- The --config flag (config-a through config-f) must match the dataset resolution; mismatches cause shape errors.
- Path length regularization adds significant memory overhead and may OOM on 11 GB cards; it can be tuned down or disabled via the loss function's path length weight.
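Since the pre-trained pickles come from hardcoded CDN URLs and are multi-gigabyte, verifying a digest before unpickling guards against both truncated downloads and tampered files. A minimal sketch — note the expected digest would have to come from a trusted source, and none is published in-repo as far as this artifact shows:

```python
import hashlib

def verify_digest(data: bytes, expected_sha256: str) -> bool:
    # Compare the SHA-256 of the downloaded bytes against a known-good digest
    # before handing them to pickle.load.
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

Usage: compute the digest once from a known-good copy, then gate every subsequent load on verify_digest returning True.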
🔗Related repos
- NVlabs/stylegan2-ada-pytorch — official successor replacing TensorFlow with PyTorch, adding adaptive discriminator augmentation (ADA) and modern training practices; the recommended fork for new projects
- NVlabs/stylegan — original StyleGAN paper implementation (2018); StyleGAN2 directly improves on its architecture and training, useful for understanding the evolution
- rosinality/stylegan2-pytorch — community PyTorch port of StyleGAN2 with a cleaner code structure and faster iteration, widely used for research extensions and ablations
- openai/guided-diffusion — complementary diffusion-based generative model, often evaluated on the same metrics (FID, IS) and compared against StyleGAN2 in recent papers
- NVlabs/nvdiffrecmc — NVIDIA neural rendering framework that uses StyleGAN2-generated images for 3D reconstruction; a real-world deployment of this architecture
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add unit tests for dnnlib/tflib custom CUDA operations (fused_bias_act and upfirdn_2d)
The repo contains critical custom CUDA ops in dnnlib/tflib/ops/ (fused_bias_act.cu, upfirdn_2d.cu) and their Python bindings, but there are no visible test files validating their correctness, numerical stability, or gradient computation. This is high-risk code that should have comprehensive tests to catch platform-specific issues and regressions.
- [ ] Create tests/test_fused_bias_act.py with tests for forward/backward passes, different data types, and edge cases
- [ ] Create tests/test_upfirdn_2d.py validating filtering correctness against reference implementations
- [ ] Add tests/conftest.py for shared TensorFlow session/device fixtures
- [ ] Verify tests run with and without GPU to catch CUDA compilation issues early
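A pure-NumPy reference like the following could anchor those correctness tests. The semantics here (act(x + b) * gain for the leaky-ReLU path, bias broadcast over the channel axis) are an assumed reading of the op's interface, so verify against the real kernel before relying on it:

```python
import numpy as np

def fused_bias_act_ref(x, b, alpha=0.2, gain=np.sqrt(2.0)):
    # Reference semantics for the fused op's leaky-ReLU path:
    # add per-channel bias, apply leaky ReLU, then scale by gain.
    y = x + b.reshape((1, -1) + (1,) * (x.ndim - 2))  # broadcast over N[C]HW
    y = np.where(y >= 0.0, y, y * alpha)
    return y * gain
```

A GPU test would then assert np.allclose between this reference and the CUDA op's output on random inputs.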
Add comprehensive integration tests for training/networks_stylegan2.py synthesis/mapping network
The core StyleGAN2 architecture is defined in training/networks_stylegan2.py but there are no visible tests verifying network construction, layer shapes, progressive growing stages, or style mixing correctness. Given this is the heart of the model, shape/connectivity regressions could go undetected.
- [ ] Create tests/test_stylegan2_network.py with fixtures for discriminator and generator instantiation
- [ ] Add tests validating output shapes through progressive growing stages (resolutions 4→8→16→...→1024)
- [ ] Test style mixing: inject different latent codes at different layers and verify outputs differ appropriately
- [ ] Verify learnable constant input tensors initialize correctly in mapping network
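The shape walk in that checklist boils down to checking that each synthesis stage doubles the resolution; a tiny sketch of the invariant, with the stage endpoints taken from the checklist above:

```python
# Resolutions visited from the 4x4 base up to 1024x1024.
resolutions = [2 ** i for i in range(2, 11)]

def assert_doubling(res_list):
    # A shape-regression test would walk these stages and assert each stage's
    # synthesis output is exactly twice the previous stage's resolution.
    for prev, cur in zip(res_list, res_list[1:]):
        assert cur == 2 * prev, (prev, cur)
```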
Add integration tests for training/dataset.py data loading pipeline with assertions on batch properties
The dataset.py module handles tfrecord loading, augmentation, and batching—critical for training reproducibility—but has no visible tests validating batch shapes, value ranges, or augmentation application. Bugs here cause silent training failures or poor convergence.
- [ ] Create tests/test_dataset.py with fixtures loading the example tfrecord formats documented in dataset_tool.py
- [ ] Add assertions validating batch tensor shapes match config (batch_size, resolution, channels)
- [ ] Test augmentation pipeline preserves value ranges and doesn't corrupt images (e.g., verify mean/std post-augmentation)
- [ ] Add regression tests for edge cases: single-image batches, resolution mismatches, missing tfrecord files
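The batch-property assertions in that checklist could share one helper. This sketch assumes NCHW layout and a [-1, 1] dynamic range — both assumptions to verify against training/dataset.py before use:

```python
import numpy as np

def check_batch(batch, batch_size, resolution, channels=3, lo=-1.0, hi=1.0):
    # Validate the invariants a data-pipeline test would assert:
    # NCHW shape matching the config, and pixel values inside the training range.
    assert batch.shape == (batch_size, channels, resolution, resolution), batch.shape
    assert float(batch.min()) >= lo and float(batch.max()) <= hi
```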
🌿Good first issues
- Add unit tests for dnnlib/tflib/tfutil.py functions (run_in_default_session, assert_shape, etc.) which lack test coverage—create tests/test_tfutil.py with pytest mocking of TF session ops
- Document the complete training hyperparameter grid (learning rates, batch sizes, progressive growth schedule) used in each config preset (config-a through config-f) in a structured table in docs/configs.md, with references to paper Table 2
- Implement missing .pkl → ONNX/SavedModel export function in a new module training/export.py to enable deployment without TensorFlow dependency—reference pretrained_networks.py for weight loading patterns
⭐Top contributors
- @tkarras — 17 commits
- @nurpax — 1 commit
- @jannehellsten — 1 commit
📝Recent commits
- bf0fe0b — Update README.md (nurpax)
- f2f751c — Update versions.html for StyleGAN3 (tkarras)
- 81ae61d — Alias-Free GAN links (tkarras)
- 315ac54 — Add link to stylegan2-ada-pytorch (tkarras)
- 23f8bed — Workaround for NCCL bug in TF 1.15 (tkarras)
- 6af5afc — Add links to stylegan2-ada (tkarras)
- dae148b — Remove direct links to network pickles (tkarras)
- ca52625 — Update S3 links (tkarras)
- ec1901f — Update citation (tkarras)
- cec605e — Convert seed 'range' into a list (jannehellsten)
🔒Security observations
The StyleGAN2 codebase has significant security concerns, primarily stemming from severely outdated dependencies in the Dockerfile.
- High · Outdated TensorFlow base image with known vulnerabilities — Dockerfile (line: FROM tensorflow/tensorflow:1.14.0-gpu-py3). The image is severely outdated (released in 2019); TensorFlow 1.14.0 has multiple known CVEs, including remote code execution vulnerabilities, and the image is no longer maintained. Fix: update to a supported TensorFlow version (2.x LTS or latest stable) to receive security patches.
- High · Vulnerable dependencies with known CVEs — Dockerfile (pip install commands). Multiple pinned dependencies have known vulnerabilities: requests==2.22.0 (CVE-2020-26137), Pillow==6.2.1 (multiple CVEs including CVE-2020-10379), and scipy==1.3.3. Fix: update all dependencies to current stable versions (requests>=2.31.0, Pillow>=10.0.0, scipy>=1.11.0) and check security advisories when upgrading.
- Medium · Missing security best practices in the Dockerfile — Dockerfile. The Dockerfile lacks a non-root USER, digest-pinned base images, multi-stage builds, and a HEALTHCHECK; the base image runs with root privileges by default. Fix: add a USER instruction (e.g. USER 1000:1000), pin the base image by digest (image@sha256:...), and add a HEALTHCHECK.
- Medium · Potential arbitrary code execution via custom CUDA operations — dnnlib/tflib/ops/*.cu and corresponding .py files. The custom CUDA code (fused_bias_act.cu, upfirdn_2d.cu) is compiled at runtime; insufficient input validation in the compilation path could lead to arbitrary code execution or buffer overflows. Fix: review and test the CUDA code, add input validation and bounds checking, build with hardening flags (e.g. -fstack-protector-strong), or ship pre-compiled trusted binaries instead of compiling at runtime.
- Medium · Unsafe file operations in dataset and training modules — dataset_tool.py, training/dataset.py, projector.py. These modules perform file I/O; without visible input validation they could be vulnerable to path traversal when processing untrusted file paths or dataset specifications. Fix: validate all file paths, resolve them with os.path.abspath() or os.path.realpath(), and reject paths containing '..' or absolute paths outside a whitelist.
- Low · Model deserialization risks — pretrained_networks.py, run_generator.py. Deserializing untrusted model pickles can execute arbitrary code. Fix: load models only from trusted sources, verify integrity with checksums or signatures, sandbox model loading, and validate the format before deserialization.
- Low · Network request security in metrics and pretrained models — pretrained_networks.py, metrics/ modules. requests==2.22.0 is used to fetch pre-trained models; without certificate verification and timeouts this is exposed to MITM and DoS attacks. Fix: update requests, always use verify=True, enforce HTTPS, set request timeouts, and validate downloaded file checksums against known-good values.
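The path-traversal mitigation suggested for dataset_tool.py and friends can be as small as a realpath containment check; a sketch where the function name and policy are illustrative:

```python
import os

def safe_join(base_dir, user_path):
    # Resolve the candidate path against base_dir and refuse anything that
    # escapes it (e.g. via '..' segments or absolute user paths).
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, user_path))
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError(f"path escapes {base_dir!r}: {user_path!r}")
    return candidate
```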
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.