RepoPilot

containernetworking/cni

Container Network Interface - networking for Linux containers

Healthy

Healthy across all four use cases

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 5mo ago
  • 20 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • Slowing — last commit 5mo ago
  • Concentrated ownership — top contributor handles 54% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — it live-updates from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/containernetworking/cni)](https://repopilot.app/r/containernetworking/cni)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/containernetworking/cni on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: containernetworking/cni

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/containernetworking/cni shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 5mo ago
  • 20 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • ⚠ Slowing — last commit 5mo ago
  • ⚠ Concentrated ownership — top contributor handles 54% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live containernetworking/cni repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/containernetworking/cni.

What it runs against: a local clone of containernetworking/cni — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in containernetworking/cni | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 173 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>containernetworking/cni</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of containernetworking/cni. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/containernetworking/cni.git
#   cd cni
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of containernetworking/cni and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "containernetworking/cni(\.git)?\b" \
  && ok "origin remote is containernetworking/cni" \
  || miss "origin remote is not containernetworking/cni (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
# (The Apache-2.0 LICENSE text begins "Apache License ... Version 2.0",
#  so match the license text, with a package.json manifest as fallback.)
(grep -qi "Apache License" LICENSE 2>/dev/null && grep -q "Version 2\.0" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in SPEC.md libcni/api.go pkg/invoke/invoke.go pkg/types/types.go pkg/version/reconcile.go; do
  test -f "$f" \
    && ok "$f" \
    || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 173 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~143d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/containernetworking/cni"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

CNI (Container Network Interface) is a CNCF specification and Go library for writing plugins that configure network interfaces in Linux containers. It provides a standardized way for container runtimes (Docker, Kubernetes, containerd) to manage container networking—adding/removing interfaces and allocating/deallocating IP addresses—without implementing networking logic themselves. The repo contains the spec (SPEC.md), the libcni Go library for integration, and cnitool, a CLI for executing CNI plugins.

It is a dual-purpose repo: libcni/ contains the core Go library (api.go, conf.go) that runtimes embed to call plugins, while cnitool/ is a standalone CLI tool (cmd/{add,del,check,gc,status}.go) for manual testing. pkg/invoke/ handles plugin invocation details (delegate.go, args.go). Tests use Ginkgo v2 (libcni_suite_test.go, api_test.go). Documentation lives in Documentation/ and root-level markdown files (SPEC.md is the canonical spec). GitHub Actions workflows in .github/workflows/ manage CI and linting (golangci.yml configured).

👥Who it's for

Container runtime engineers and infrastructure maintainers who need to integrate container networking into their platforms (e.g., Kubernetes developers using CNI to plug in Calico, Flannel, or Weave). Plugin authors who implement specific networking solutions. DevOps teams choosing and deploying network plugins in containerized environments.

🌱Maturity & risk

Production-ready and widely adopted. The project is a mature CNCF project with a clear governance structure (GOVERNANCE.md, MAINTAINERS file), biweekly meetings, and rigorous security practices (CII Best Practices badge, OpenSSF Scorecard). CI is robust (.github/workflows/test.yaml, scorecard.yml) with Go 1.21+ support. However, the repo itself is relatively stable in churn—core API changes are infrequent, suggesting the spec is well-established and not rapidly evolving.

Low risk overall, but a few considerations: The spec defines the contract between runtimes and plugins, so breaking changes have ecosystem-wide impact (though CONVENTIONS.md and spec-upgrades.md suggest careful versioning). Minimal direct dependencies (only Ginkgo/Gomega for testing, Cobra for CLI, vishvananda/netns for namespace ops)—all well-maintained. Single-repo design means all changes flow through one review process. The plugin ecosystem (containernetworking/plugins) is separate, so security fixes here don't auto-update deployed plugins.

Active areas of work

The project is actively maintained but in a consolidation phase—focus is on stability and spec clarification rather than major features. The presence of RELEASING.md, DCO enforcement, and spec-upgrades.md suggests recent work on release processes and backwards compatibility. GitHub workflows (commands.yml, test.yaml) are actively run. No evidence of sweeping refactors, indicating the API is stable.

🚀Get running

Clone and build:

git clone https://github.com/containernetworking/cni.git
cd cni
make

Build the cnitool CLI:

make build

Run tests:

make test

Review CONTRIBUTING.md before opening PRs (requires DCO sign-off).

Daily commands: No server to run—CNI is a library and CLI tool. To test locally:

make test          # Run Go tests with Ginkgo
make build         # Build cnitool binary
./bin/cnitool --help  # Inspect CLI commands (add/del/check/status/gc)

The cnitool requires CNI plugins (from containernetworking/plugins) to be installed in /opt/cni/bin/ to function end-to-end.

🗺️Map of the codebase

  • SPEC.md — The CNI specification document that defines the contract for all plugins and implementations; essential for understanding the core protocol.
  • libcni/api.go — Main library API exposing CNI operations (Add, Del, Check, Status, GC); entry point for container runtimes integrating CNI.
  • pkg/invoke/invoke.go — Core invocation mechanism that discovers and executes CNI plugins as subprocesses; handles the plugin execution contract.
  • pkg/types/types.go — Defines the canonical Go types for CNI results and errors; all plugin output must conform to these structures.
  • pkg/version/reconcile.go — Handles CNI specification version negotiation between runtimes and plugins; critical for backward compatibility.
  • libcni/conf.go — Configuration file parsing and chain resolution; interprets the JSON NetworkConfig that drives plugin execution.
  • cnitool/main.go — CLI tool demonstrating the complete CNI workflow; reference implementation for using libcni in practice.
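The subprocess contract that pkg/invoke implements can be sketched with only the Go standard library: spawn the plugin binary, pass context through CNI_* environment variables, write the JSON network config to stdin, and read the JSON result from stdout. The /bin/sh one-liner below is a hypothetical stand-in for a real plugin binary (e.g. /opt/cni/bin/bridge); real invocation also resolves binaries via CNI_PATH and maps exit codes to spec-defined errors.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
)

// invokePlugin mirrors the shape of a CNI plugin call: env carries the
// CNI_* context, stdin carries the JSON network config, stdout carries
// the JSON result. The "plugin" here is a tiny shell stand-in that
// drains stdin and echoes a fixed result, keeping the sketch self-contained.
func invokePlugin(netconf []byte, env []string) (map[string]any, error) {
	// A real runtime would exec e.g. /opt/cni/bin/bridge here.
	cmd := exec.Command("/bin/sh", "-c",
		`cat >/dev/null; echo '{"cniVersion":"1.0.0","ips":[{"address":"10.1.0.5/24"}]}'`)
	cmd.Env = env
	cmd.Stdin = bytes.NewReader(netconf)
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	var result map[string]any
	if err := json.Unmarshal(out, &result); err != nil {
		return nil, err
	}
	return result, nil
}

func main() {
	conf := []byte(`{"cniVersion":"1.0.0","name":"mynet","type":"bridge"}`)
	env := []string{
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=example",
		"CNI_NETNS=/var/run/netns/example",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	}
	result, err := invokePlugin(conf, env)
	if err != nil {
		panic(err)
	}
	fmt.Println(result["cniVersion"]) // 1.0.0
}
```

The same three channels (env, stdin, stdout) carry every CNI operation; only CNI_COMMAND changes.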

🛠️How to make changes

Add a new CNI operation type (like Status or GC)

  1. Define the operation method signature in libcni/api.go alongside existing Add/Del/Check methods (libcni/api.go)
  2. Add corresponding command wrapper in cnitool/cmd/ to expose via CLI (e.g., status.go, gc.go already exist) (cnitool/cmd/status.go)
  3. Update SPEC.md to document the new operation's environment variables, input/output contract (SPEC.md)
  4. Add integration tests in libcni/api_test.go verifying the operation against a mock plugin chain (libcni/api_test.go)

Support a new CNI specification version

  1. Create new type file at pkg/types/XYZ/types.go with version-specific struct definitions (pkg/types/100/types.go)
  2. Implement UnmarshalJSON and MarshalJSON for backward compatibility in new version file (pkg/types/100/types.go)
  3. Register version conversion logic in pkg/types/internal/convert.go to/from canonical types (pkg/types/internal/convert.go)
  4. Update pkg/version/reconcile.go to recognize and validate the new version in plugin capabilities (pkg/version/reconcile.go)
  5. Add test suite at pkg/types/XYZ/types_test.go with serialization round-trip tests (pkg/types/100/types_test.go)

Build a container runtime integration using libcni

  1. Import github.com/containernetworking/cni/libcni and github.com/containernetworking/cni/pkg/types (libcni/api.go)
  2. Create a CNIConfig instance with libcni.NewCNIConfig() pointing to the plugin directory (cnitool/cmd/add.go)
  3. Call CNIConfig.AddNetworkList() with RuntimeConf (containing container ID, netns path) for each container start (libcni/api.go)
  4. On container stop, call DelNetworkList() with the same RuntimeConf to clean up resources (libcni/api.go)
  5. Parse the returned types.Result to extract IP addresses and DNS for the container's resolv.conf (pkg/types/types.go)

Write a CNI plugin compatible with this library

  1. Use pkg/skel/skel.go as the plugin entry point, implementing CmdAdd, CmdDel, CmdCheck functions (pkg/skel/skel.go)
  2. Read configuration from stdin as JSON; unmarshal into pkg/types/types.NetConf (pkg/types/types.go)
  3. Read CNI_* environment variables via pkg/invoke/args.go patterns for containerID, netns, ifName (pkg/invoke/args.go)
  4. On success, return types.Result marshaled to JSON on stdout; on error return types.Error (pkg/types/types.go)
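Real plugins should build on pkg/skel (step 1), which handles dispatch and error formatting. Stripped of that helper, the bare contract from steps 2-4 reduces to roughly the sketch below; the netConf struct is a simplified stand-in for types.NetConf, and the fixed 10.1.0.5/24 address is a placeholder where a real plugin would delegate to an IPAM plugin.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// netConf holds the fields every CNI network config carries; real
// plugins embed plugin-specific fields alongside these (simplified
// stand-in for types.NetConf).
type netConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

// cmdAdd handles CNI_COMMAND=ADD: parse the config from stdin, do the
// network setup (elided here), and return the result JSON for stdout.
func cmdAdd(stdin []byte) (string, error) {
	var conf netConf
	if err := json.Unmarshal(stdin, &conf); err != nil {
		return "", err
	}
	// ... create the interface inside CNI_NETNS, run IPAM, etc. ...
	return fmt.Sprintf(`{"cniVersion":%q,"ips":[{"address":"10.1.0.5/24"}]}`,
		conf.CNIVersion), nil
}

func main() {
	cmd := os.Getenv("CNI_COMMAND")
	if cmd == "" {
		// Not driven by a runtime: exercise the ADD path once as a demo.
		out, _ := cmdAdd([]byte(`{"cniVersion":"1.0.0","name":"mynet","type":"bridge"}`))
		fmt.Println(out)
		return
	}
	stdin, _ := io.ReadAll(os.Stdin)
	switch cmd {
	case "ADD":
		out, err := cmdAdd(stdin)
		if err != nil {
			// Errors are reported as a JSON error object on stdout plus a
			// nonzero exit; SPEC.md defines the well-known error codes.
			fmt.Printf(`{"cniVersion":"1.0.0","code":6,"msg":%q}`, err.Error())
			os.Exit(1)
		}
		fmt.Println(out)
	case "DEL", "CHECK", "VERSION":
		// Elided; VERSION must report the spec versions the plugin supports.
	default:
		fmt.Fprintln(os.Stderr, "unknown CNI_COMMAND "+cmd)
		os.Exit(1)
	}
}
```

pkg/skel adds what this sketch omits: env-var validation, version negotiation, and consistent error objects.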

🔧Why these technologies

  • Subprocess invocation for plugins — Plugins are language-agnostic binaries; subprocess isolation provides security boundary and version independence
  • JSON stdin/stdout for plugin communication — Language-neutral serialization format; human-debuggable for troubleshooting; easy to implement in any language
  • Environment variables for context (CNI_*) — Avoids JSON bloat for repeated fields; standard Unix pattern; plugin auto-discovers container/namespace context
  • Multiple spec version types (020, 040, 100) — Maintains backward compatibility while allowing breaking changes; plugins can declare minimum required version
  • Cobra CLI for cnitool — Familiar plugin architecture for operations teams; mirrors Kubernetes kubectl command structure

⚖️Trade-offs already made

  • Plugins are executed synchronously, not as daemons

    • Why: Simpler state management; easier to debug; matches container lifecycle (start → network → stop → cleanup)
    • Consequence: Plugin startup overhead on every ADD/DEL; no daemon-based optimization; requires efficient plugins
  • Configuration is JSON files, not dynamic API

    • Why: Stateless, declarative config; easy to version control; no coordination daemon required
    • Consequence: No runtime config hot-reloading; runtimes must manage config distribution to all hosts; no global policy enforcement
  • Chain semantics: each plugin output feeds next plugin's input

    • Why: Composable; flexible; allows plugins to augment results (e.g., overlay + IPAM + firewall in order)
    • Consequence: Plugin order matters; debugging multi-plugin failures is harder; no built-in rollback if chain breaks
  • CNI library only handles networking, not storage/compute

    • Why: Narrow scope; single responsibility; reusable across container runtimes (Kubernetes, containerd, CRI-O)
    • Consequence: Runtimes must orchestrate multiple subsystems; no integrated lifecycle management
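The chain trade-off above can be simulated in-process. The two stand-in plugins are hypothetical closures rather than subprocesses, but the threading of the previous result into the next plugin mirrors how libcni drives a conflist:

```go
package main

import "fmt"

// result is a toy stand-in for types.Result; a chain threads it from
// one plugin to the next as prevResult.
type result struct {
	IPs []string
	DNS []string
}

// plugin stands in for one entry in a conflist: it receives the
// previous plugin's result and returns an augmented one.
type plugin func(prev result) (result, error)

// runChain mirrors the AddNetworkList loop: invoke each plugin in
// order, feeding its output to the next; abort on the first error
// (no built-in rollback, matching the trade-off noted above).
func runChain(chain []plugin) (result, error) {
	var prev result
	for _, p := range chain {
		next, err := p(prev)
		if err != nil {
			return result{}, err
		}
		prev = next
	}
	return prev, nil
}

func main() {
	bridge := func(prev result) (result, error) {
		prev.IPs = append(prev.IPs, "10.1.0.5/24") // interface + IP setup
		return prev, nil
	}
	firewall := func(prev result) (result, error) {
		// Augments rather than replaces: it sees bridge's IPs, adds DNS.
		prev.DNS = append(prev.DNS, "10.1.0.1")
		return prev, nil
	}
	final, err := runChain([]plugin{bridge, firewall})
	if err != nil {
		panic(err)
	}
	fmt.Println(final.IPs, final.DNS) // [10.1.0.5/24] [10.1.0.1]
}
```

Swapping the order of bridge and firewall changes what each plugin sees, which is why plugin order matters and multi-plugin failures are harder to debug.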

🪤Traps & gotchas

  • Namespace-aware execution: pkg/invoke/delegate.go may execute plugins in different Linux network namespaces (netns ops via vishvananda/netns)—test failures can be namespace-related, not code bugs.
  • stdin/stdout contract: Plugins communicate results via stdout (JSON); stderr is for logging. If a plugin binary is missing or crashes, errors can be cryptic.
  • Environment variables: CNI plugins receive spec-defined environment variables (CNI_COMMAND, CNI_NETNS, etc.)—cnitool constructs these, but manual testing requires them to be set correctly (see cnitool/cmd/add.go for examples).
  • Backwards compatibility: The spec is versioned; see libcni/backwards_compatibility_test.go—changing api.go or conf.go can break existing plugins deployed in production.
  • Plugin search path: cnitool/cmd/ assumes plugins are at /opt/cni/bin/ (or the CNI_PATH env var); if missing, tests fail with 'executable not found' even if the code is correct.


💡Concepts to learn

  • Network Namespace (netns) — CNI plugins execute in or manipulate Linux network namespaces to isolate container networking from the host and other containers—understanding netns is essential to grasp why pkg/invoke uses vishvananda/netns
  • Delegate Plugin Pattern — The CNI spec and libcni/invoke allow a plugin to delegate to another plugin (e.g., a meta-plugin that calls bridge then ipam)—this is how complex networking stacks compose; see SPEC.md section on chaining
  • JSON Configuration Format (CNI NetConf) — libcni/conf.go parses CNI network config files (JSON) that specify which plugins to run and their parameters; understanding the format is mandatory for writing integrations or new plugins
  • Exit-on-error Plugin Chaining — The CNI spec defines how plugins in a chain are invoked and how errors are propagated (e.g., if ipam fails, the bridge plugin cleanup is still called); this is subtle and affects api.go's orchestration
  • IPAM (IP Address Management) — CNI separates network interface setup from IP allocation—a separate IPAM plugin (e.g., host-local, dhcp) handles IP assignment; understanding the separation explains the invoke pattern in pkg/invoke/delegate.go
  • Container Lifecycle Hooks (Add/Del/Check/Status) — The CNI spec defines these four (now six with Status/GC) operations that plugins must implement—libcni/api.go's interface mirrors these; understanding the contract is essential to API design decisions
  • Backwards Compatibility via Versioning — CNI spec versioning and libcni/backwards_compatibility_test.go show how the spec evolves without breaking deployed plugins—critical for understanding how changes are made safely in production-facing code
  • containernetworking/plugins — Reference implementations of CNI plugins (bridge, host-device, ipvlan, macvlan, etc.)—users of libcni typically deploy these or similar plugins to actually configure container networks
  • kubernetes/kubernetes — Kubernetes is the primary consumer and promoter of CNI; kubelet calls CNI plugins via libcni to manage pod networking
  • projectcalico/calico — A major CNI plugin implementation (Calico CNI) that conforms to the spec; real-world example of how plugins integrate with the CNI specification
  • flannel-io/flannel — Another widely-used CNI plugin providing overlay networking; demonstrates alternative plugin architecture and use cases beyond bridge networking
  • containerd/containerd — Container runtime that can be configured to use CNI plugins for networking; shows integration point for runtimes using this library
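One concrete contract worth internalizing before reading pkg/invoke/args.go: CNI_ARGS is a semicolon-separated list of KEY=VALUE pairs (the spec's canonical example is FOO=BAR;ABC=123, and Kubernetes historically passed pod metadata such as K8S_POD_NAME this way). A simplified sketch of that parsing, ignoring the key-syntax validation the real code performs:

```go
package main

import (
	"fmt"
	"strings"
)

// parseCNIArgs splits the CNI_ARGS environment variable, a
// semicolon-separated list of KEY=VALUE pairs (e.g. "FOO=BAR;ABC=123"),
// into a map. Simplified: the real code also validates key characters.
func parseCNIArgs(args string) (map[string]string, error) {
	out := map[string]string{}
	if args == "" {
		return out, nil
	}
	for _, pair := range strings.Split(args, ";") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 || kv[0] == "" {
			return nil, fmt.Errorf("invalid CNI_ARGS pair %q", pair)
		}
		out[kv[0]] = kv[1]
	}
	return out, nil
}

func main() {
	m, err := parseCNIArgs("K8S_POD_NAMESPACE=default;K8S_POD_NAME=web-0")
	if err != nil {
		panic(err)
	}
	fmt.Println(m["K8S_POD_NAME"]) // web-0
}
```

Values in CNI_ARGS are free-form strings, so plugins must treat them as untrusted input.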

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for cnitool commands with multiple CNI plugins

The cnitool package has command implementations (add.go, del.go, check.go, gc.go, status.go) but lacks integration tests that verify these commands work correctly with real plugin chains. Currently, pkg/invoke has integration tests (get_version_integration_test.go) but cnitool itself has no integration test coverage. This would catch regressions when CNI plugin orchestration changes.

  • [ ] Create cnitool/cmd/integration_test.go to test end-to-end command execution
  • [ ] Use test CNI plugins from pkg/invoke/fakes as mock implementations
  • [ ] Add test cases for: add with multiple plugins, del with cleanup, check for connectivity, gc for garbage collection
  • [ ] Update .github/workflows/test.yaml to run integration tests separately from unit tests
  • [ ] Document test setup in cnitool/README.md for future contributors

Add comprehensive test coverage for libcni/conf.go config parsing edge cases

The libcni/conf.go file handles critical CNI configuration parsing but conf_test.go likely lacks tests for edge cases like malformed JSON, missing required fields, version mismatches, and invalid network names. Given CNI's role in production container networking, robust config validation testing is essential.

  • [ ] Audit libcni/conf_test.go for missing test cases around error conditions
  • [ ] Add tests for: empty config files, invalid JSON, missing 'cniVersion' field, missing 'name' field, unknown plugin types
  • [ ] Add tests for config list validation (multiple plugins, circular dependencies if applicable)
  • [ ] Add property-based tests using table-driven test patterns to cover combinations of valid/invalid fields
  • [ ] Document expected behavior in libcni/README.md or code comments

Create platform-specific test files for pkg/ns namespace handling across Linux/Windows/Darwin

The pkg/ns directory has platform-specific implementations (ns_linux.go, ns_windows.go, ns_darwin.go) but likely no corresponding platform-specific test files that verify namespace operations work correctly on each OS. This is critical since namespace isolation is core CNI functionality and platform differences can introduce subtle bugs.

  • [ ] Create pkg/ns/ns_linux_test.go with tests for Linux namespace creation, joining, and cleanup
  • [ ] Create pkg/ns/ns_windows_test.go with tests for Windows-specific network namespace operations
  • [ ] Create pkg/ns/ns_darwin_test.go with tests for macOS compatibility (if applicable)
  • [ ] Use build constraints (//go:build linux, //go:build windows, //go:build darwin — the modern form of +build tags) to ensure tests run only on appropriate platforms
  • [ ] Update .github/workflows/test.yaml to run platform-specific test matrices or document manual testing requirements

🌿Good first issues

  • Write integration tests for pkg/invoke/args.go that cover all CNI environment variable combinations (CNI_COMMAND, CNI_NETNS, CNI_CONTAINERID, etc.)—currently args_test.go exists but has gaps for edge cases like missing NETNS.: Improves robustness of plugin invocation; directly impacts runtime reliability; doesn't require architectural changes
  • Add examples/ directory with runnable Go code showing how to embed libcni in a custom container runtime (currently only cnitool exists; a minimal example runtime would help onboarding).: Documentation gap; many runtime authors learn by example; aligns with project goal of being easy to integrate
  • Expand Documentation/ with a troubleshooting guide for common cnitool failures (e.g., 'plugin not found', 'namespace join failed', 'invalid config JSON')—currently only cnitool/README.md and SPEC.md exist.: Users frequently struggle with CNI setup; a concise troubleshooting doc would reduce support burden and improve UX


📝Recent commits

  • 7c27007 — Merge pull request #1177 from Abhigyan-Shekhar/Abhigyan-Shekhar-patch-1 (LionelJouin)
  • f1cb561 — docs: Fix broken link to "Version considerations" in SPEC.md (Abhigyan-Shekhar)
  • 1d67f89 — Merge pull request #1170 from Faaizz/main (squeed)
  • bab140e — Merge pull request #1172 from squeed/meeting-time-change (squeed)
  • c5470a9 — Merge pull request #1171 from scpeters/invalidcache_delnetwork (squeed)
  • 27db8d2 — README: change meeting to 10:00 US/East. (squeed)
  • a30dcc7 — Update test expectations to tolerate error (scpeters)
  • 36b1672 — Tolerate invalid cni cache for DelNetwork (scpeters)
  • 77b1001 — spec: update number of specifically defined terms (Faaizz)
  • c377947 — Merge pull request #1168 from squeed/meeting-to-biweekly (squeed)

🔒Security observations

The CNI project demonstrates generally good security practices, as evidenced by its CII Best Practices badge and OpenSSF Scorecard integration. The primary concern is the use of Go 1.21, which is now end-of-life and should be upgraded promptly. Dependencies appear reasonably current as of mid-2024, though ongoing monitoring is recommended. The project would benefit from SBOM generation, explicit security policy documentation, and regular dependency scanning in CI/CD pipelines. No hardcoded secrets, SQL injection risks, or XSS vulnerabilities were identified in the provided file structure. The codebase follows security best practices for an infrastructure/networking library.

  • Medium · Outdated Go Version — go.mod. The project specifies 'go 1.21' in go.mod. Go 1.21 was released in August 2023 and has reached end-of-life. Using outdated Go versions exposes the project to known security vulnerabilities that have been patched in newer releases. Fix: Update to Go 1.22 or later (preferably the latest stable release). Update the go.mod file and test thoroughly with the newer version.
  • Medium · Potential Outdated Dependencies — go.mod, go.sum. Several dependencies have timestamps from mid-2024 (golang.org/x/tools v0.24.0, google/pprof). While not necessarily vulnerable, periodic dependency audits should be performed to ensure no known CVEs exist in transitive dependencies. Fix: Run 'go list -json -m all | nancy sleuth' or 'go list -json -m all | osv-scanner' regularly. Execute 'go get -u' and 'go mod tidy' in CI/CD to detect vulnerabilities early.
  • Low · Missing SBOM/Dependency Tracking in Release Artifacts — RELEASING.md, CI/CD workflows. The repository does not appear to generate or publish Software Bill of Materials (SBOM) documents with releases. This makes it difficult for downstream consumers to perform supply chain security analysis. Fix: Add SBOM generation to the release process using tools like 'syft' or 'cyclonedx-gomod'. Include SBOM files in release artifacts for transparency.
  • Low · Incomplete Security Policy Documentation — Repository root. While there is a CONTRIBUTING.md and CODE-OF-CONDUCT.md, there is no visible SECURITY.md file that documents the security reporting policy and vulnerability disclosure process. Fix: Create a SECURITY.md file at the repository root following the GitHub security policy guidelines. Include responsible disclosure procedures and contact information.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
