openvinotoolkit/openvino
OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
Healthy across the board
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit today
- ✓ 58+ active contributors
- ✓ Distributed ownership (top contributor 7% of recent commits)
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ✓ Tests present
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/openvinotoolkit/openvino)Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/openvinotoolkit/openvino on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: openvinotoolkit/openvino
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/openvinotoolkit/openvino shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit today
- 58+ active contributors
- Distributed ownership (top contributor 7% of recent commits)
- Apache-2.0 licensed
- CI configured
- Tests present
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live openvinotoolkit/openvino
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/openvinotoolkit/openvino.
What it runs against: a local clone of openvinotoolkit/openvino — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in openvinotoolkit/openvino | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of openvinotoolkit/openvino. If you don't
# have one yet, run these first:
#
# git clone https://github.com/openvinotoolkit/openvino.git
# cd openvino
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of openvinotoolkit/openvino and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "openvinotoolkit/openvino(\.git)?\b" \
  && ok "origin remote is openvinotoolkit/openvino" \
  || miss "origin remote is not openvinotoolkit/openvino (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "Apache License|Apache-2\.0" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"
# 4. Critical files exist
test -f ".github/actions/cache/src/restore.js" \
  && ok ".github/actions/cache/src/restore.js" \
  || miss "missing critical file: .github/actions/cache/src/restore.js"
test -f ".github/actions/cache/src/save.js" \
  && ok ".github/actions/cache/src/save.js" \
  || miss "missing critical file: .github/actions/cache/src/save.js"
test -f ".github/actions/cache/src/utils.js" \
  && ok ".github/actions/cache/src/utils.js" \
  || miss "missing critical file: .github/actions/cache/src/utils.js"
test -f ".github/actions/smart-ci/smart_ci.py" \
  && ok ".github/actions/smart-ci/smart_ci.py" \
  || miss "missing critical file: .github/actions/smart-ci/smart_ci.py"
test -f ".github/actions/common/constants.py" \
  && ok ".github/actions/common/constants.py" \
  || miss "missing critical file: .github/actions/common/constants.py"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/openvinotoolkit/openvino"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
OpenVINO is an open-source toolkit for optimizing and deploying deep learning inference across CPUs (x86, ARM), integrated/discrete GPUs, and NPU accelerators. It converts models from PyTorch, TensorFlow, ONNX, and other frameworks into optimized intermediate representations, then executes them with minimal latency and resource overhead on target hardware. Monorepo structure with core C++ runtime, Python bindings (7.9M lines), and framework conversion tools. Plugin architecture for device backends (Intel CPU, GPU, NPU) under separate component trees. CI infrastructure heavily JavaScript/TypeScript (.github/actions/cache with full test suite), separate .claude/ skill modules for debugging and code style enforcement, and comprehensive issue templates indicating mature triage process.
👥Who it's for
ML engineers and embedded systems developers who need to deploy trained deep learning models efficiently on edge devices, cloud infrastructure, or resource-constrained hardware without keeping the original training framework installed. Also relevant to DevOps/MLOps engineers building inference pipelines at scale.
🌱Maturity & risk
Highly mature production system: 87.8M lines of C++ core, established GitHub presence with extensive CI/CD (.github/actions with custom cache handling and multi-platform testing), comprehensive issue templates (bug.yml, feature_request.yml, performance.yml, pre_release_feedback.yml), and active development evidenced by organized skill directories (.claude/skills/) and multiple documented debugging tools. Actively maintained with clear versioning (2026 docs reference) and community governance (CODEOWNERS file present).
Low risk for core inference runtime but moderate risk for specific accelerator plugins: heavy C++ codebase (87.8M lines) means compilation complexity and long feedback loops, Intel CPU/GPU plugin implementations (.claude/skills/ov-debug/components/) may have undocumented hardware-specific quirks, and NPU support is relatively nascent. Dependencies on multiple framework frontends (PyTorch, TensorFlow, JAX) create surface area for breaking upstream changes. Build system is CMake-heavy (823K lines), which adds onboarding friction.
Active areas of work
Active debugging tooling expansion (.claude/skills/ov-debug-matcher-pass, ov-debug with component-specific guides for CPU/GPU plugin diagnosis), code style enforcement infrastructure (ov-ensure-coding-style), and structured diagnostic reporting (example-diagnosis-report.md). Suggests focus on developer experience, plugin stability, and transformation pipeline visibility.
🚀Get running
git clone https://github.com/openvinotoolkit/openvino.git
cd openvino
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j$(nproc)
Or for Python: pip install -U openvino. Full installation options at https://docs.openvino.ai/2026/get-started/install-openvino.html
Daily commands:
Clone + CMake configure/build (above). For Python API: python -c "import openvino; print(openvino.__version__)". CI runs via .github/workflows/* (not shown in file list but evident from .github/actions/cache structure). Development mode requires Node.js for cache action testing: cd .github/actions/cache && npm install && npm run all.
🗺️Map of the codebase
- .github/actions/cache/src/restore.js — Entry point for cache restoration in CI/CD; every contributor working on caching must understand this flow
- .github/actions/cache/src/save.js — Entry point for cache saving in CI/CD; critical for build performance optimization
- .github/actions/cache/src/utils.js — Shared utility functions for tarball compression/decompression; affects all cache operations
- .github/actions/smart-ci/smart_ci.py — Intelligent CI orchestration logic that determines which components to test; foundational for PR validation
- .github/actions/common/constants.py — Central configuration for GitHub Actions; shared across all custom actions
- .github/actions/cache/action.yml — Primary GitHub Action definition for cache handling; defines the public API for all workflows
- .github/actions/smart-ci/components_schema.yml — Schema defining testable components; controls CI job matrix generation
🛠️How to make changes
Add a new GitHub Action
- Create the action directory: mkdir -p .github/actions/my-new-action (.github/actions)
- Define the action interface in action.yml with inputs/outputs (.github/actions/cache/action.yml)
- Write the implementation script, Python or JavaScript (.github/actions/smart-ci/smart_ci.py)
- Add the action to .github/CODEOWNERS for review routing (.github/CODEOWNERS)
Add a new component to smart CI test selection
- Define the component in components_schema.yml with the files it owns (.github/actions/smart-ci/components_schema.yml)
- Update smart_ci.py to include the component in the file-to-test mapping (.github/actions/smart-ci/smart_ci.py)
- Add a test case in smart_ci_test.py to verify component detection (.github/actions/smart-ci/smart_ci_test.py)
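The selection logic those steps extend can be sketched as follows — a simplified stand-in, not the real smart_ci.py; the schema contents, patterns, and function name here are illustrative:

```python
from fnmatch import fnmatch

# Illustrative stand-in for components_schema.yml: each component declares
# the file patterns it owns (these patterns are made up, not the real schema).
COMPONENTS_SCHEMA = {
    "CPU": ["src/plugins/intel_cpu/*"],
    "GPU": ["src/plugins/intel_gpu/*"],
    "docs": ["docs/*"],
}

def affected_components(changed_files):
    """Map a PR's changed paths to the set of components whose tests must run."""
    hit = set()
    for path in changed_files:
        for component, patterns in COMPONENTS_SCHEMA.items():
            if any(fnmatch(path, pat) for pat in patterns):
                hit.add(component)
    return hit
```

A file matched by no pattern triggers no component, which is why the schema must be kept current when new directories are added.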
Modify cache behavior or compression strategy
- Update the compression logic in utils.js (tarball creation/extraction) (.github/actions/cache/src/utils.js)
- Adjust the save strategy in saveImpl.js if changing what gets cached (.github/actions/cache/src/saveImpl.js)
- Update the restore logic in restoreImpl.js for any extraction changes (.github/actions/cache/src/restoreImpl.js)
- Run the ci-test script to verify: npm run ci-test (.github/actions/cache/package.json)
Add S3 artifact storage for a new component
- Add the S3 path constant in constants.py (.github/actions/common/constants.py)
- Create the upload logic in store_artifacts.py using s3_client (.github/actions/store_artifacts/store_artifacts.py)
- Create the download logic in restore_artifacts.py to retrieve it (.github/actions/restore_artifacts/restore_artifacts.py)
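A minimal sketch of the shared key layout those steps depend on — the constant name, bucket prefix, and path scheme here are assumptions for illustration, not the actual contents of constants.py:

```python
# Hypothetical mirror of the constants.py role: one shared key layout that
# the upload (store_artifacts.py) and download (restore_artifacts.py) sides
# both import, so they can never disagree on where an artifact lives.
S3_BUCKET_PREFIX = "openvino-ci-artifacts"  # assumed name, not the real bucket

def artifact_key(component: str, commit_sha: str, filename: str) -> str:
    """Build the S3 object key for one component's artifact at one commit."""
    return f"{S3_BUCKET_PREFIX}/{component}/{commit_sha}/{filename}"
```

Centralizing the layout is the whole point of step 1: both sides compute keys from the same constant rather than duplicating string templates.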
🔧Why these technologies
- GitHub Actions (YAML + JavaScript/Python) — Native GitHub platform integration for CI/CD; runs directly on runners with zero external dependencies
- Tarball compression (gzip) — Efficiently compress large build artifacts (deps, binaries) for fast S3 transfer and local decompression
- S3 object storage — Persistent artifact storage across job runs; supports massive binaries and caches without Git limitations
- Python for orchestration logic — Smart CI component selection and dependency analysis require complex file parsing; Python is more readable than shell
- ncc (Node Compiler Collection) for bundling — Bundles Node.js actions into single files for fast execution without npm install overhead
⚖️Trade-offs already made
- Tarball-based caching instead of GitHub's native cache@v3
  - Why: Enables finer control over compression, multi-platform support, and custom retention policies
  - Consequence: Added complexity; requires manual cache-key management and incurs S3 storage costs
- Schema-based component definition (YAML) instead of dynamic discovery
  - Why: Explicit, human-readable test matrix; prevents accidental test bloat
  - Consequence: Schema must be manually maintained when adding new components; risk of stale configuration
- Separate save-only/restore-only actions vs one monolithic action
  - Why: Lets workflows optimize: restore before the build, save only after success
  - Consequence: More actions to maintain; developers must use the correct variant or lose cache benefits
- JavaScript for cache handling, Python for orchestration
  - Why: Matches tool to use case: JS for quick GitHub context access, Python for file I/O and complex logic
  - Consequence: Requires two runtime environments; harder to share common utilities
🚫Non-goals (don't propose these)
- Real-time build monitoring dashboards (async CI only)
- Cross-organization secret sharing (GitHub Secrets scoped to org only)
- Deployment to production infrastructure (CI only; no CD)
- Local development environment setup (CI actions are server-only)
- Source code analysis or SAST scanning (no embedded security scanning)
🪤Traps & gotchas
- The CMake build system is non-trivial: it requires a proper compiler toolchain, and multiple optional dependencies (Python dev headers, framework SDKs) may cause silent configuration failures.
- The plugin architecture means CPU and GPU inference paths diverge significantly — benchmarks from one backend may not generalize to another.
- Python bindings are likely generated from C++ via SWIG or pybind11; modifying the API requires a regeneration step, not just a runtime change.
- The .claude/ directories appear to be agentic skill definitions (not standard repo dirs) — check whether custom tooling is required to use them locally.
- The GitHub Actions cache action is custom and tarball-based — standard GitHub caching may not behave identically.
🏗️Architecture
💡Concepts to learn
- Intermediate Representation (IR) — OpenVINO's core abstraction: models are converted to a device-agnostic IR, then lowered to device-specific code paths; understanding IR transformations is essential for optimization and debugging
- Graph-level IR transformations — Referenced in .claude/skills/ov-debug/components/debug-transformations.md; the optimization pipeline rewrites compute graphs (e.g., fusing ops, quantizing weights) before codegen, so understanding transformation passes is critical for performance troubleshooting
- Plugin architecture / Backend abstraction — CPU, GPU, and NPU are swappable backends via plugin interface (visible in debug skill components); this abstraction enables cross-device code reuse and isolated optimization per target hardware
- Quantization and pruning — Post-training optimization techniques heavily used in OpenVINO to reduce model size and latency on edge devices; critical for deployment on resource-constrained hardware
- Heterogeneous inference — OpenVINO's multi-device scheduling (CPU fallback for NPU unsupported ops, GPU-CPU pipeline) requires careful synchronization and memory coherency; understanding when and why ops migrate between devices is essential for latency prediction
- SWIG / pybind11 bindings — 7.9M lines of Python depend on C++ bindings (likely SWIG-generated or pybind11); modifying the C++/Python API boundary requires understanding the binding generator, not just the runtime code
- Custom GitHub Actions caching (tarball strategy) — CI uses custom tarball-based cache (.github/actions/cache/) instead of standard GitHub cache; understanding this non-standard approach is essential for fixing CI failures or optimizing build times
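As a toy illustration of the quantization concept above — this is the bare arithmetic only, not OpenVINO's or NNCF's actual implementation — affine int8 quantization maps each float to an integer via a scale and zero point:

```python
def quantize_int8(x: float, scale: float, zero_point: int) -> int:
    """Affine quantization: map a real value to an int8 bucket."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize_int8(q: int, scale: float, zero_point: int) -> float:
    """Approximate reconstruction; round-trip error is bounded by scale / 2."""
    return (q - zero_point) * scale
```

Choosing per-tensor (or per-channel) scales so that the clamp rarely fires is the hard part that real post-training quantization tooling automates.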
🔗Related repos
- onnx/onnx — Defines the ONNX intermediate representation that OpenVINO converts to/from; critical for understanding the model serialization format
- openvinotoolkit/openvino.genai — Official companion repo for generative AI optimizations (mentioned in the README); extends OpenVINO with LLM-specific inference patterns
- huggingface/optimum-intel — Integration layer allowing direct deployment of Hugging Face Transformers models via OpenVINO without manual conversion
- pytorch/pytorch — One of the primary model sources OpenVINO converts from; understanding PyTorch export mechanics (ONNX, TorchScript) aids model ingestion
- tensorflow/tensorflow — Alternative primary model source alongside PyTorch; TensorFlow model conversion is a core OpenVINO use case
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive unit tests for cache action utilities (src/utils.js)
The .github/actions/cache/tests/ directory has tests for cleanup.test.js, restore.test.js, and save.test.js, but there is no test file for src/utils.js, which likely contains shared utility functions used across the save/restore/cleanup operations. This is a critical gap that could let bugs in helper functions affect cache operations across all CI workflows undetected.
- [ ] Review src/utils.js to identify all exported utility functions
- [ ] Create .github/actions/cache/tests/utils.test.js with test cases for each utility function
- [ ] Ensure coverage includes edge cases (path handling, tarball operations, error scenarios)
- [ ] Run 'npm run coverage' to verify utils.js achieves >80% coverage
- [ ] Update package.json test documentation if needed
Implement GitHub Actions workflow for cache action CI/CD validation
While the cache action has test infrastructure (npm test, eslint, jest), there's no .github/workflows/ file visible that runs these checks on every PR to the .github/actions/cache/ directory. Given this is a critical CI component used throughout OpenVINO builds, a dedicated workflow ensures regressions are caught before merging.
- [ ] Create .github/workflows/cache-action-ci.yml
- [ ] Add job to run 'npm run all' (format, lint, test, coverage, package) from the cache action directory
- [ ] Add step to verify dist/ artifacts are up-to-date (npm run package generates matching dist files)
- [ ] Configure to trigger on changes to .github/actions/cache/** and .github/workflows/cache-action-ci.yml
- [ ] Add status badge to .github/actions/cache/README.md if one exists
Add integration tests for cache action with real tarball compression scenarios
.github/actions/cache/tests/ contains unit tests but appears to mock file system operations. Given this action handles actual tarball compression/decompression for CI artifact caching, it needs integration tests validating real compression scenarios (large files, symlinks, permissions, concurrent access). Current tests in tests/ likely don't verify the tar operations work correctly end-to-end.
- [ ] Create .github/actions/cache/tests/integration.test.js for end-to-end tests
- [ ] Add test case: create temporary directory with sample files, run save operation, verify tarball created and can be extracted with correct contents
- [ ] Add test case: restore from previously saved tarball, verify file permissions and structure preserved
- [ ] Add test case: handle edge cases (symlinks, large files >1GB, readonly files, nested directories)
- [ ] Ensure integration tests run separately from unit tests (add to npm scripts as 'npm run test:integration')
- [ ] Update .github/actions/cache/README.md documenting how to run integration tests locally
🌿Good first issues
- Add missing documentation for the NPU backend plugin architecture (analogous to .claude/skills/ov-debug/components/debug-intel-cpu-plugin.md and debug-intel-gpu-plugin.md) to improve discoverability for contributors targeting accelerator support.
- Expand .claude/skills/ov-ensure-coding-style/SKILL.md with concrete C++ style examples and CMake linting rules, since the monorepo is 87.8M lines of C++ but the style guide appears skeletal.
- Create runnable example scripts in docs/ demonstrating model quantization, batch inference, and multi-device scheduling, filling the gap between the quick-start guide and production deployment patterns.
⭐Top contributors
Click to expand
Top contributors
- @EgorDuplensky — 7 commits
- @dependabot[bot] — 5 commits
- @evkotov — 4 commits
- @sungeunk — 4 commits
- @dmatveev — 4 commits
📝Recent commits
Click to expand
Recent commits
- cac2891 — [CPU] fix: correct the isa selection by env MAX_CPU_ISA (#35536) (azhai219)
- 2492324 — [NPU] [DOCUMENTATION] [TEST FIXES] [ENCRYPTION] Add documentation & test fixes for CACHE_ENCRYPTION_CALLBACKS (MirceaDan99)
- ddeaa62 — [Snippets][CPU][RV64] Enable softmax tokenization (#35719) (aobolensk)
- 043035f — [GPU] Fix Qwen3.5 MOE pattern matcher (#35737) (EgorDuplensky)
- 50439fc — [NPUW] Kokoro updates (#35390) (OrestChura)
- 9f7b20e — [NPU] Add support for runtime_requirements and compatibility_check (#35478) (PatrikStepan)
- 77f295e — [OV JS] Bump JS version (#35734) (Retribution98)
- d814408 — [Transformations] MoE 3 Gemm pattern: Gelu activation support (#35618) (v-Golubev)
- b62053d — [NPUW] Fixed attn behavior inputs bind. (#35729) (intelgaoxiong)
- 3ae48ee — [TF FE] fix out-of-bounds read in RestoreV2 output index parsing (#35587) (evkotov)
🔒Security observations
The OpenVINO toolkit's GitHub Actions cache module has moderate security concerns. The most critical issues are: (1) inclusion of the placeholder 'fs' npm package, which shadows a Node.js built-in; (2) outdated dependencies that likely carry known vulnerabilities; and (3) lack of automated security scanning. The project follows good practices with SECURITY.md and OpenSSF Best Practices badge participation, but dependency management and npm audit integration need strengthening, and the truncated package.json prevents a full assessment. Immediate actions: remove the 'fs' package, run npm audit, and add security scanning to CI/CD.
- High · Deprecated or Insecure Package: fs — .github/actions/cache/package.json (dependencies and devDependencies). The package.json includes 'fs' (v0.0.1-security) as both a dependency and a devDependency. The 'fs' module is a Node.js built-in and should never be installed from npm; the 0.0.1-security version is a known placeholder/stub package. This can cause confusion, bloat, and supply-chain risk. Fix: Remove 'fs' from both dependency lists and use the built-in module directly via require('fs') or import fs from 'fs'.
- High · Outdated Dependencies with Known Vulnerabilities — .github/actions/cache/package.json (dependencies and devDependencies). Several dependencies are outdated and may contain known security vulnerabilities: @actions/core (^3.0.0), tar (^7.5.13), and the eslint-related packages. Fix: Run 'npm audit' to identify vulnerabilities, update packages to the latest secure versions, and implement automated dependency scanning in the CI/CD pipeline.
- Medium · Missing npm Audit Configuration — .github/actions/cache/package.json. The package.json has no npm audit configuration or security baseline settings, making it hard to enforce security standards or catch vulnerable dependencies automatically. Fix: Add an '.npmrc' with 'audit-level=moderate' or higher, and run 'npm audit' in CI/CD with the --audit-level flag to block changes that introduce known vulnerabilities.
- Medium · Incomplete package.json (Truncated Dependencies) — .github/actions/cache/package.json. The devDependencies section is incomplete/truncated (ends with 'jest': '^29.7.0',), making it difficult to assess the full security posture; the file appears cut off mid-entry. Fix: Provide the complete package.json and ensure all dependencies are fully listed and validated.
- Medium · No Security Hardening for GitHub Actions — .github/actions/cache/action.yml and related workflow files. The workflow configuration and cache action show no evidence of hardening measures such as artifact signature verification, SLSA provenance, or secure token handling. Fix: Implement SLSA provenance tracking and artifact signing, and scope GitHub tokens to the minimum required permissions.
- Low · Node.js Version Requirement Could Be More Specific — .github/actions/cache/package.json (engines.node). The engines.node field specifies '>=20', which permits any later version and doesn't enforce minor/patch versions that carry security fixes. Fix: Use a tighter constraint (e.g., '>=20.11.0' for an LTS with security patches) or an .nvmrc file for consistent versions.
- Low · Missing CODEOWNERS Validation — .github/CODEOWNERS. A CODEOWNERS file exists, but the analysis cannot verify whether security-critical paths are protected with approval requirements. Fix: Ensure CODEOWNERS covers security-critical components, dependencies, and workflow files, and require security-team approval for changes to them.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.