RepoPilot

Unity-Technologies/ml-agents

The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

Healthy

Healthy across the board (the overall verdict takes the worst of 4 axes)

Use as dependency: Concerns — non-standard license (Other)

Fork & modify: Healthy — has a license, tests, and CI; a clean foundation to fork and modify.

Learn from: Healthy — documented and popular; a useful reference codebase to read through.

Deploy as-is: Healthy — no critical CVEs, sane security posture; runnable as-is.

  • Last commit 1d ago
  • 27+ active contributors
  • Distributed ownership (top contributor 30% of recent commits)
  • Other licensed
  • CI configured
  • Tests present
  • Non-standard license (Other) — review terms
What would change the summary?
  • Use as dependency: Concerns would become Mixed if the license terms are clarified

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README; the badge live-updates from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/unity-technologies/ml-agents)](https://repopilot.app/r/unity-technologies/ml-agents)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/unity-technologies/ml-agents on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: Unity-Technologies/ml-agents

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in the Verify before trusting section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/Unity-Technologies/ml-agents shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 1d ago
  • 27+ active contributors
  • Distributed ownership (top contributor 30% of recent commits)
  • Other licensed
  • CI configured
  • Tests present
  • ⚠ Non-standard license (Other) — review terms

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live Unity-Technologies/ml-agents repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/Unity-Technologies/ml-agents.

What it runs against: a local clone of Unity-Technologies/ml-agents — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---------------|----------------|
| 1 | You're in Unity-Technologies/ml-agents | Confirms the artifact applies here, not a fork |
| 2 | License is still Other | Catches a relicense before you depend on it |
| 3 | Default branch develop exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 31 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>Unity-Technologies/ml-agents</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of Unity-Technologies/ml-agents. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/Unity-Technologies/ml-agents.git
#   cd ml-agents
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of Unity-Technologies/ml-agents and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "Unity-Technologies/ml-agents(\.git)?\b" \
  && ok "origin remote is Unity-Technologies/ml-agents" \
  || miss "origin remote is not Unity-Technologies/ml-agents (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(Other)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Other\"" package.json 2>/dev/null) \
  && ok "license is Other" \
  || miss "license drift — was Other at generation time"

# 3. Default branch
git rev-parse --verify develop >/dev/null 2>&1 \
  && ok "default branch develop exists" \
  || miss "default branch develop no longer exists"

# 4. Critical files exist
test -f ".github/workflows/pytest.yml" \
  && ok ".github/workflows/pytest.yml" \
  || miss "missing critical file: .github/workflows/pytest.yml"
test -f "DevProject/Packages/manifest.json" \
  && ok "DevProject/Packages/manifest.json" \
  || miss "missing critical file: DevProject/Packages/manifest.json"
test -f ".pre-commit-config.yaml" \
  && ok ".pre-commit-config.yaml" \
  || miss "missing critical file: .pre-commit-config.yaml"
test -f ".yamato/com.unity.ml-agents-test.yml" \
  && ok ".yamato/com.unity.ml-agents-test.yml" \
  || miss "missing critical file: .yamato/com.unity.ml-agents-test.yml"
test -f "DevProject/ProjectSettings/ProjectVersion.txt" \
  && ok "DevProject/ProjectSettings/ProjectVersion.txt" \
  || miss "missing critical file: DevProject/ProjectSettings/ProjectVersion.txt"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 31 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/Unity-Technologies/ml-agents"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Unity ML-Agents is a toolkit that bridges the Unity game engine with deep reinforcement learning (PPO, SAC, MA-POCA) and imitation learning (BC, GAIL) algorithms via PyTorch. It lets developers train intelligent NPC agents, test game builds autonomously, and run multi-agent simulations by exchanging observations and actions between Unity C# environments and Python trainers over gRPC. Hybrid monorepo: the com.unity.ml-agents C# Unity package (under /com.unity.ml-agents/) interfaces with Python trainers in /ml-agents/ and /ml-agents-envs/; /docs/ contains Jupyter notebooks for examples; .yamato/ orchestrates multi-platform CI testing (Linux, Mac, Windows); .github/workflows/ handles PyPI publication and documentation builds.

👥Who it's for

Game developers building 2D/3D/VR games who want to train controllable NPCs or run automated testing; AI researchers using Unity as a simulation environment for deep RL experiments; hobbyists prototyping agent behaviors without implementing algorithms from scratch.

🌱Maturity & risk

Production-ready and actively maintained. Large codebase (1.8M LOC of C#, 1.4M of Python) under a non-standard "Other" license, 17+ example environments, comprehensive CI/CD via GitHub Actions (.github/workflows/) and Yamato (.yamato/), and regular releases. Recent test coverage workflows (.yamato/coverage_tests.metafile) and PyPI publishing (.github/workflows/publish_pypi.yaml) indicate ongoing development and a focus on stability.

Low-to-moderate risk for mature users: the gRPC dependency on protobuf requires code generation (.yamato/protobuf-generation-test.yml), Unity versions are tightly coupled to package versions (see the versioning in .yamato/), and the Python API has changed across releases. Monitor release notes for breaking changes; the large dependency graph across PyTorch, TensorFlow (for some examples), and NumPy increases the attack surface.

Active areas of work

Active release cycle with published .github/workflows/ for pytest validation, PyPI publishing of the Python API, and Yamato nightly/integration testing (.yamato/training-int-tests.yml, .yamato/pytest-gpu.yml). The recent addition of .yamato/sonar-python-package.yml indicates a focus on code quality scanning; gym-interface-test.yml shows ongoing OpenAI Gym compatibility work.

🚀Get running

Clone and install the Python packages: `git clone https://github.com/Unity-Technologies/ml-agents.git && cd ml-agents && pip install -e ./ml-agents-envs && pip install -e ./ml-agents`. For the Unity side, add the com.unity.ml-agents package to your project manifest. Verify setup: `mlagents-learn --help` (Python trainer CLI) and test a bundled example environment.

Daily commands: For training: `mlagents-learn config.yaml --run-id=my_run` (the trainer runs a gRPC server; the Unity client connects). For inference in-editor: load the trained `.onnx` model via Unity Barracuda. For the Python LLAPI: `from mlagents_envs.environment import UnityEnvironment; env = UnityEnvironment(file_name=path_to_build)`, then call `step()` in a loop.
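
A minimal LLAPI loop, sketched against the documented mlagents_envs API. The build path is hypothetical, random actions stand in for a real policy, and the sketch assumes a continuous action space:

```python
# A random-action LLAPI loop: observations in, actions out, over gRPC.
import numpy as np
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.base_env import ActionTuple

env = UnityEnvironment(file_name="./builds/3DBall")  # hypothetical build path
env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    n_agents = len(decision_steps)  # agents waiting for an action this step
    # Random continuous actions as a stand-in for a trained policy.
    actions = np.random.uniform(-1, 1, (n_agents, spec.action_spec.continuous_size))
    env.set_actions(behavior_name, ActionTuple(continuous=actions.astype(np.float32)))
    env.step()

env.close()
```

This loop is where a custom trainer or evaluation harness would plug in: read observations from `decision_steps`, compute actions, and feed them back with `set_actions`.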

🗺️Map of the codebase

  • .github/workflows/pytest.yml — Core CI/CD pipeline that validates all Python agent training code and must pass before merging—defines the test matrix for all supported environments.
  • DevProject/Packages/manifest.json — Declares all Unity package dependencies and local package references that the ML-Agents toolkit requires—any structural change impacts the build.
  • .pre-commit-config.yaml — Enforces code quality standards (linting, formatting) on all commits—every contributor must understand what checks block their PRs.
  • .yamato/com.unity.ml-agents-test.yml — Main Unity test suite configuration running on Yamato CI—defines how the C# agent runtime is validated across platforms.
  • DevProject/ProjectSettings/ProjectVersion.txt — Declares the minimum Unity editor version required—critical for understanding compatibility constraints and build reproducibility.
  • .github/workflows/publish_pypi.yaml — Release pipeline for the Python ml-agents package to PyPI—essential for understanding versioning and distribution workflows.

🛠️How to make changes

Add a new agent behavior test in the runtime suite

  1. Create a new C# test class in the runtime tests directory following the pattern of existing tests (DevProject/Assets/ML-Agents/Scripts/Tests/Runtime/AcademyTest/AcademyStepperTest.cs)
  2. Add test methods using Unity's [Test] attribute to verify academy step synchronization, agent observation collection, or policy execution (DevProject/Assets/ML-Agents/Scripts/Tests/Runtime/AcademyTest/AcademyStepperTest.cs)
  3. Ensure the test scene reference is added to the corresponding .unity test scene asset (DevProject/Assets/ML-Agents/Scripts/Tests/Runtime/AcademyTest/AcademyStepperTestScene.unity)
  4. Run the test through .yamato/com.unity.ml-agents-test.yml or locally via Unity Test Framework

Add a new external dependency to the Python package

  1. Update the package requirements specification (typically in a setup.py or pyproject.toml in the Python package; see the sketch after this list) (.github/workflows/pytest.yml)
  2. Add the dependency to the test matrix in the pytest workflow to ensure it passes on all supported Python versions (.github/workflows/pytest.yml)
  3. Verify the dependency is included in published wheels by checking the publish_pypi.yaml workflow (.github/workflows/publish_pypi.yaml)
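
A hedged sketch of step 1 for a setuptools-based package; the package name and pins below are illustrative, not the repo's actual setup.py:

```python
# setup.py (excerpt) — declare the new dependency with a version constraint
# so published wheels pick it up automatically.
from setuptools import setup, find_packages

setup(
    name="mlagents",  # illustrative; match the real package metadata
    packages=find_packages(),
    install_requires=[
        "grpcio>=1.11.0",       # existing pins stay as-is
        "new-dependency>=1.0",  # hypothetical: the dependency being added
    ],
)
```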

Add a new editor or runtime code quality check

  1. Define the linting/formatting rule in the pre-commit configuration (.pre-commit-config.yaml)
  2. Add the rule to the search-and-replace configuration if applicable (e.g., reformatting issues) (.pre-commit-search-and-replace.yaml)
  3. Ensure the check is enforced in the CI/CD pipeline by running pre-commit in pytest.yml or a dedicated workflow

Add a new test category or platform to CI/CD

  1. Define the new test job in the appropriate Yamato or GitHub Actions workflow file (.yamato/com.unity.ml-agents-test.yml)
  2. Specify the Unity editor versions, platforms (Windows/Linux/Mac), and test assembly filters (.yamato/test_versions.metafile)
  3. Link the new job to the promotion or nightly pipeline if needed (.yamato/com.unity.ml-agents-promotion.yml)

🔧Why these technologies

  • Unity C# + MonoBehaviour — Provides tight integration with the Unity Editor and runtime; allows agents to interact directly with game physics, rendering, and scene management.
  • PyTorch (Python API) — Industry-standard deep learning framework used for training agent policies via reinforcement learning, imitation learning, and other methods outside Unity.
  • GitHub Actions + Yamato CI — GitHub Actions handles Python/library tests; Yamato integrates with Unity Test Framework for cross-platform editor and standalone build validation.
  • Protobuf (referenced in .yamato) — Enables efficient serialization between the C# runtime and Python training processes over gRPC or sockets.
  • Pre-commit hooks — Enforces consistent code style (black, isort, flake8) and prevents malformed commits before they reach CI.

⚖️Trade-offs already made

  • Separate C# runtime and Python training into two processes communicating via network protocols

    • Why: Decouples game development (C#/Unity) from ML research (Python/PyTorch), allowing each to scale independently and use domain-specific tools.
    • Consequence: Introduces communication latency and complexity; requires careful serialization of observations and actions; demands robust error handling in inter-process communication.
  • Use Yamato CI for Unity tests instead of only GitHub Actions

    • Why: Yamato is tightly integrated with Unity's testing infrastructure and can spawn actual editor instances; GitHub Actions cannot.
    • Consequence: Requires maintaining two CI configurations; build times are longer, but coverage of Unity-specific concerns is more comprehensive.
  • Store test scenes and runtime code in a DevProject folder rather than a separate repository

    • Why: Keeps reference implementations and examples close to the package definition; simplifies onboarding for developers.
    • Consequence: Large monorepo; slower clones; harder to version independently; but stronger coupling ensures tests always match package state.

🚫Non-goals (don't propose these)

  • Not a real-time 3D graphics engine—delegates all rendering to Unity.
  • Does not provide distributed training infrastructure—focuses on single-machine or per-user training via Python API.
  • Not a visual/graphical model editor—configuration is code-first (C# agents, Python scripts) rather than drag-and-drop.
  • Does not support web-based training UI—training happens via Python CLI or notebooks, not a hosted web service.

🪤Traps & gotchas

  • Protobuf versions are pinned in requirements; regenerate stubs if you change the proto schema (.yamato/protobuf-generation-test.yml).
  • The Unity editor version must match the package declaration (e.g., 2021.3+ for recent releases).
  • gRPC port conflicts occur if multiple trainers run locally on the default port 5004.
  • ONNX export requires a specific model architecture; not all PyTorch models serialize cleanly to Barracuda.
  • Ray RLlib integration is optional; trainer.py uses a provider pattern but defaults to the mlagents algorithms.
  • Python 3.10+ may require specific PyTorch builds on some platforms.


💡Concepts to learn

  • Proximal Policy Optimization (PPO) — Default on-policy RL algorithm in ML-Agents (ml-agents/mlagents/trainers/ppo/); understanding PPO's clipped loss and trust region is essential to tuning hyperparameters in trainer_config.yaml (the clipped objective is written out after this list)
  • Soft Actor-Critic (SAC) — Off-policy algorithm in ML-Agents for continuous control with entropy regularization; required for stable multi-agent competitive scenarios where exploration trade-off is critical
  • gRPC Protocol Buffers — ML-Agents uses gRPC for C#↔Python communication; you must regenerate .proto stubs when modifying agent/environment interface schema (.yamato/protobuf-generation-test.yml automates this)
  • Imitation Learning (Behavioral Cloning & GAIL) — ML-Agents supports BC and GAIL for learning from demonstrations; essential for scripted NPC behavior capture before reinforcement learning fine-tuning
  • Multi-Agent Cooperation and Competition — the MA-POCA (Multi-Agent POsthumous Credit Assignment) algorithm handles multi-agent scenarios; understanding centralized training with decentralized execution (CTDE) is crucial for team-based game environments
  • Curriculum Learning — ML-Agents natively supports task difficulty schedules defined in trainer_config.yaml; progressive environment complexity training is a core feature for complex behaviors
  • ONNX Model Serialization & Barracuda Inference — Trained PyTorch models export to ONNX then load into Unity via Barracuda for production NPC control; understanding this pipeline avoids model compatibility surprises
  • openai/gym — ML-Agents wraps Unity environments as gym.Env (see .yamato/gym-interface-test.yml and the wrapper sketch after this list); gym is the de facto standard for RL environment APIs
  • pytorch/pytorch — All neural network trainers in ml-agents/ use PyTorch for policy/value networks; core dependency for the learning loop
  • Unity-Technologies/Barracuda — ONNX inference engine for in-game model execution; trained ml-agents models export to Barracuda for real-time NPC inference without Python runtime
  • ray-project/ray — Ray RLlib can be integrated as a custom training plugin (see ml-agents/mlagents/trainers/plugins/); distributed training alternative to default mlagents algorithms
  • Unity-Technologies/unitydocs-examples — Companion repo with runnable C# example scenes; many ml-agents tutorials link to this for step-by-step Unity setup guides
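
For reference, PPO's clipped surrogate objective as given in the original PPO paper, where r_t(θ) is the new-to-old policy probability ratio, Â_t the advantage estimate, and the clip range ε corresponds to the epsilon hyperparameter in trainer_config.yaml:

```latex
L^{\mathrm{CLIP}}(\theta)
  = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\;
      \operatorname{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
```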
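
And a minimal sketch of the gym wrapper route. In recent releases the wrapper ships as UnityToGymWrapper inside mlagents_envs (older releases shipped it in a separate gym_unity package); the build path below is hypothetical:

```python
# Wrap a single-agent Unity build as a classic gym-style environment.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.envs.unity_gym_env import UnityToGymWrapper

unity_env = UnityEnvironment(file_name="./builds/3DBall")  # hypothetical build path
env = UnityToGymWrapper(unity_env)  # expects exactly one agent in the scene

obs = env.reset()
for _ in range(100):
    # Old-style gym API: step() returns (obs, reward, done, info).
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
env.close()
```

The wrapper targets single-agent scenes; for multi-agent setups, drop down to the LLAPI loop shown in Get running.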

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for MLAgentsSettings configuration in Editor tests

The file structure shows MLAgentsSettingsTests.cs exists at DevProject/Assets/ML-Agents/Scripts/Tests/Editor/MLAgentsSettings/MLAgentsSettingsTests.cs, but there's no corresponding implementation file visible. The ML-Agents package likely has configuration settings that control training behavior, agent initialization, and sensor setup. New contributors can expand test coverage for edge cases like invalid configuration values, missing sensor definitions, and version compatibility issues that would prevent misconfigurations from reaching production training scenarios.

  • [ ] Locate the MLAgentsSettings.cs implementation file in the main ml-agents package (not DevProject)
  • [ ] Review existing MLAgentsSettingsTests.cs to identify gaps in test coverage
  • [ ] Add unit tests for: invalid configuration values, null/empty sensor lists, incompatible algorithm selections, and file I/O errors
  • [ ] Add integration tests verifying settings propagate correctly to Academy initialization
  • [ ] Document expected behavior in test comments and ensure tests follow NUnit conventions (Unity Test Framework is NUnit-based)

Create GitHub Actions workflow for validating protobuf generation consistency

The file structure shows .yamato/protobuf-generation-test.yml exists, but there's no corresponding .github/workflows/protobuf-validation.yaml. Since ML-Agents uses Protocol Buffers for communication between Unity and Python training environments, detecting proto file changes that weren't regenerated is critical. A GitHub Actions workflow would catch PRs where .proto files are modified but the generated Python/C# code wasn't updated, preventing runtime serialization failures.

  • [ ] Examine .yamato/protobuf-generation-test.yml to understand the protobuf validation approach
  • [ ] Create .github/workflows/protobuf-validation.yaml that runs on PRs touching any .proto files
  • [ ] Add steps to regenerate protobuf code and compare against committed files, failing if differences exist (see the sketch after this list)
  • [ ] Include both Python and C# protobuf generation steps to match the dual-language nature of ML-Agents
  • [ ] Document in CONTRIBUTING.md how developers should regenerate protos before committing
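
A hedged sketch of the drift check at the heart of that workflow. The regeneration command is a placeholder; substitute the repo's actual protobuf regeneration script:

```python
#!/usr/bin/env python3
# Fail CI if regenerated protobuf stubs differ from what's committed.
import subprocess
import sys

# Hypothetical path — point this at the repo's real regeneration entry point.
REGEN_CMD = ["bash", "protobuf-definitions/make.sh"]

subprocess.run(REGEN_CMD, check=True)

# `git diff --exit-code` returns non-zero when the working tree changed,
# i.e. when committed stubs were stale relative to the .proto sources.
drift = subprocess.run(["git", "diff", "--exit-code", "--stat"])
if drift.returncode != 0:
    sys.exit("protobuf stubs are stale: regenerate and commit them")
print("protobuf stubs match the committed sources")
```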

Add GPU-specific unit tests for sensor data processing in the Python API

The file structure shows .yamato/pytest-gpu.yml exists for GPU testing infrastructure, but comprehensive GPU-specific tests for the sensor pipeline (CompressedSensor, VectorSensor, etc.) are likely incomplete. New contributors can add PyTorch-accelerated tests validating that sensor data preprocessing (normalization, compression, batching) produces identical results on CPU vs GPU, and that memory usage stays bounded. This is critical since training performance and reproducibility depend on consistent sensor handling across devices.

  • [ ] Locate the Python API sensor processing code (likely in ml-agents/mlagents/trainers/)
  • [ ] Review existing pytest.yml and pytest-gpu.yml to understand GPU test execution
  • [ ] Create new test file: ml-agents/tests/pytorch/test_sensors_gpu.py with tests for CompressedSensor, VectorSensor, and RaycastSensor on GPU
  • [ ] Add tests validating numerical stability: the same input produces the same output (within float precision) on CPU and CUDA (see the parity sketch after this list)
  • [ ] Add performance regression tests for sensor batching throughput on GPU to catch regressions in the .yamato/pytest-gpu.yml CI
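
A hedged sketch of the CPU/GPU parity style of test. The normalize() helper is hypothetical; wire the test to the real preprocessing entry points:

```python
# test_sensors_gpu.py — CPU vs CUDA parity for a sensor-style preprocessing step.
import pytest
import torch

def normalize(obs: torch.Tensor) -> torch.Tensor:
    # Stand-in for the real observation normalization; replace with the
    # actual sensor preprocessing function under test.
    return (obs - obs.mean(dim=0)) / (obs.std(dim=0) + 1e-8)

@pytest.mark.skipif(not torch.cuda.is_available(), reason="requires CUDA")
def test_normalize_cpu_gpu_parity():
    torch.manual_seed(0)
    obs = torch.randn(64, 32)  # batch of 64 vector observations
    cpu_out = normalize(obs)
    gpu_out = normalize(obs.cuda()).cpu()
    # Identical inputs should agree within float32 tolerance across devices.
    torch.testing.assert_close(cpu_out, gpu_out, rtol=1e-5, atol=1e-6)
```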

🌿Good first issues

  • Add missing type hints to ml-agents/mlagents/trainers/sac/trainer.py (SAC algorithm file lacks full typing annotations compared to PPO); improves IDE support and contributor onboarding
  • Expand com.unity.ml-agents/Tests/Editor/ unit test coverage for Sensors/ components; many sensor subclasses (RaycastSensor, CameraSensor) lack dedicated test files despite being user-facing
  • Document the protobuf regeneration workflow in docs/ with shell commands; .yamato/protobuf-generation-test.yml exists but no user-facing guide explains how to modify .proto files and rebuild Python/C# stubs


📝Recent commits

  • 7d80dbc — Preparing ML-Agents Release 4.0.3 (#6291) (Codengineer)
  • 874a3b6 — Docs improvements and feedback (#6289) (OliviaBayley)
  • 462b8cf — Bobd/fast enter play mode (#6288) (Codengineer)
  • fb893f1 — Release/4.0.2 (#6283) (Codengineer)
  • 7359962 — Fix deprecated instanceID API usages in com.unity.ml-agents (#6287) (Codengineer)
  • c0296f9 — Bobd/adding ml agents tests (#6280) (Codengineer)
  • acb3078 — Release/4.0.1 (#6265) (maryamziaa)
  • 316a48f — Remove deprecated parameters (#6278) (maryamziaa)
  • d43ab59 — Free disk space (#6277) (maryamziaa)
  • e0489b7 — Change test versions to trunk (#6273) (maryamziaa)

🔒Security observations

  • High · Outdated and Vulnerable setuptools Version — Dockerfile - RUN pip install setuptools==41.0.0. The Dockerfile pins setuptools to version 41.0.0, released in July 2019. This version is significantly outdated and contains known security vulnerabilities. setuptools 41.0.0 has multiple CVEs including arbitrary code execution vulnerabilities during package installation. Fix: Update setuptools to the latest stable version (>=70.0.0). Remove the version pin or use a recent version constraint like 'setuptools>=70.0.0' to ensure security updates are applied.
  • High · Insecure HTTP Package Installation — Dockerfile - RUN wget http://packages.cloud.google.com/apt/doc/apt-key.gpg. The Dockerfile uses insecure HTTP to download packages from Google Cloud SDK repository. The apt-key retrieval uses HTTP without HTTPS, making it vulnerable to man-in-the-middle (MITM) attacks where malicious packages could be injected. Fix: Switch to HTTPS for package downloads: 'wget https://packages.cloud.google.com/apt/doc/apt-key.gpg'. Always use HTTPS for retrieving GPG keys and packages.
  • High · Outdated CUDA and Ubuntu Base Image — Dockerfile - FROM nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04. The Dockerfile uses CUDA 10.2 (released in November 2019) and Ubuntu 18.04 (LTS ended April 2023). These images contain numerous known vulnerabilities. Ubuntu 18.04 is no longer receiving security updates. Fix: Update to a modern, actively maintained base image such as 'nvidia/cuda:12.x-cudnn-devel-ubuntu22.04' or 'ubuntu:22.04 LTS'. This ensures access to current security patches.
  • Medium · Missing Package Integrity Verification — Dockerfile - RUN wget http://security.ubuntu.com/ubuntu/pool/main/libx/libxfont/libxfont1_1.5.1-1ubuntu0.16.04.4_amd64.deb. The Dockerfile downloads and installs packages (like libxfont1) without verifying checksums or signatures. Additionally, the wget for libxfont1 uses HTTP instead of HTTPS, increasing MITM risk. Fix: Use HTTPS (https://security.ubuntu.com), verify package signatures, or leverage the package manager's built-in verification. Consider using the package manager directly instead of manual downloads: 'apt-get install libxfont1'.
  • Medium · Incomplete Dockerfile - Missing Security Context — Dockerfile - RUN mkdir /ml-age (incomplete) and missing USER directive. The Dockerfile creates a directory '/ml-age' with the comment '# checkout ml-agents for SHA' but the instruction is incomplete. Additionally, there's no specification of a non-root user, which means containers will run as root by default. Fix: Complete the dockerfile logic and add a non-root user for running the container. Add lines like: 'RUN useradd -m -u 1000 mluser' and 'USER mluser' before the entrypoint.
  • Medium · Missing Python Dependencies Lock File — Dependencies/ directory and Python package management. The provided Dependencies/Package file content is empty. There's no visible requirements.txt, poetry.lock, or pipfile.lock that would lock dependency versions. The Python API dependencies appear unmanaged, increasing supply chain risk. Fix: Implement dependency pinning using requirements.txt with exact versions, or use Poetry/pip-tools for reproducible builds. Pin all transitive dependencies, not just direct dependencies.
  • Low · Unnecessary unminimize Step — Dockerfile - RUN yes | unminimize. The Dockerfile uses 'yes | unminimize', which reinstalls documentation and utilities stripped from the minimal image, increasing the attack surface and image size. Fix: Remove the unminimize command. Use a minimal base image and install only required packages. This reduces image size and attack surface.
  • Low · Missing HEALTHCHECK Directive — Dockerfile. The Dockerfile lacks a HEALTHCHECK instruction, making it difficult to monitor container health in production environments. Fix: add a HEALTHCHECK instruction with a lightweight command that exits non-zero when the containerized process is unhealthy.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
