RepoPilot

k3s-io/k3s

Lightweight Kubernetes

Healthy

Healthy across the board

Use as dependencyHealthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modifyHealthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn fromHealthy

Documented and popular — useful reference codebase to read through.

Deploy as-isHealthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 1d ago
  • 19 active contributors
  • Distributed ownership (top contributor 26% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/k3s-io/k3s)](https://repopilot.app/r/k3s-io/k3s)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/k3s-io/k3s on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: k3s-io/k3s

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/k3s-io/k3s shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 1d ago
  • 19 active contributors
  • Distributed ownership (top contributor 26% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live k3s-io/k3s repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/k3s-io/k3s.

What it runs against: a local clone of k3s-io/k3s — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in k3s-io/k3s | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 31 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>k3s-io/k3s</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of k3s-io/k3s. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/k3s-io/k3s.git
#   cd k3s
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of k3s-io/k3s and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "k3s-io/k3s(\.git)?\b" \
  && ok "origin remote is k3s-io/k3s" \
  || miss "origin remote is not k3s-io/k3s (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. The Apache-2.0 LICENSE file reads
#    "Apache License ... Version 2.0", not the bare SPDX identifier.
( (grep -qi "Apache License" LICENSE 2>/dev/null \
     && grep -qi "Version 2.0" LICENSE 2>/dev/null) \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in cmd/k3s/main.go cmd/server/main.go cmd/agent/main.go Makefile \
         .github/workflows/build-k3s.yaml; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 31 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/k3s-io/k3s"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

K3s is a lightweight, production-ready Kubernetes distribution packaged as a single binary (<100MB) that bundles containerd, Flannel CNI, CoreDNS, Traefik, and other components into one cohesive system. It strips away complexity and OS dependencies while maintaining full Kubernetes conformance, using sqlite3 as the default backend (with etcd3, MariaDB, MySQL, Postgres support) and exposing the kubelet API over websocket tunnels to eliminate worker node port exposure. Monorepo structure: cmd/ contains entry points (cmd/agent/main.go, cmd/cert/main.go) that bootstrap the single binary; pkg/ (inferred from .github workflows and Go patterns) holds component managers and integrations; .github/workflows/ defines the build and test orchestration; Makefile drives local compilation; integration and unit tests live alongside code. The launcher wraps Kubernetes and bundled components into a unified startup flow.

👥Who it's for

DevOps engineers and embedded systems developers who need Kubernetes on edge devices, IoT hardware, CI/CD environments, or resource-constrained ARM machines without the operational overhead of managing etcd, multiple control plane services, or complex networking—and maintainers of the k3s-io organization who contribute to the core Kubernetes distribution.

🌱Maturity & risk

Highly mature and production-ready: it's a CNCF project with full Kubernetes conformance testing, comprehensive CI/CD via GitHub Actions (.github/workflows/ contains 20+ test pipelines including nightly-install.yaml, integration.yaml, e2e.yaml), extensive documentation (BUILDING.md, CONTRIBUTING.md), and active maintenance evidenced by the large Go codebase (1.76MB) and continuous releases via release.yml. The project has strong governance (GOVERNANCE.md, CODEOWNERS, DCO enforcement) and security practices (SECURITY.md, govulncheck.yml, trivy-scan.yml).

Low operational risk but moderate dependency complexity: k3s pins specific versions of 20+ upstream projects (etcd, containerd, runc, docker, flannel, traefik) via go.mod replace directives, creating maintenance burden when upstream changes—however, this is intentional for consistency. The single-binary design means a k3s bug affects all bundled components simultaneously. The project is actively maintained with frequent releases, but runs on the Kubernetes release cycle (1.26.2 currently), meaning breaking changes inherit from upstream Kubernetes.

Active areas of work

The project is actively developed with recent updates to Go 1.26.2, CI pipelines for build-k3s.yaml and integration.yaml, security scanning via govulncheck.yml and trivy-scan.yml, and release automation via release.yml. The .github/workflows/ show ongoing work on airgap deployments (airgap.yaml), E2E testing, and nightly install validation—indicating focus on reliability, security patching, and edge-case scenario coverage.

🚀Get running

Clone, build, and run a dev K3s server:

git clone https://github.com/k3s-io/k3s.git
cd k3s
make
# Binary output in dist/artifacts/k3s
./dist/artifacts/k3s server  # Starts a single-node K3s cluster

For agent nodes, run ./dist/artifacts/k3s agent --server https://<server-ip>:6443 --token <token>.

Daily commands: Local dev server:

make
./dist/artifacts/k3s server

Multi-node cluster (agent joins server):

# Terminal 1: Start server
./dist/artifacts/k3s server
# Terminal 2: Get token from server and start agent
token=$(cat /var/lib/rancher/k3s/server/node-token)
./dist/artifacts/k3s agent --server https://127.0.0.1:6443 --token $token

Interact with cluster via standard kubectl (bundled in binary).
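For example, kubectl can be invoked through the k3s binary itself, since the binary multiplexes subcommands (a sketch; the dist path comes from the `make` output above, and `/etc/rancher/k3s/k3s.yaml` is the default kubeconfig location k3s writes):

```shell
# No separate kubectl install is needed; the k3s binary wraps it:
#   ./dist/artifacts/k3s kubectl get nodes
#   ./dist/artifacts/k3s kubectl get pods -A
# Or point a standalone kubectl at the kubeconfig k3s writes by default:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# kubectl get nodes
```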

🗺️Map of the codebase

  • cmd/k3s/main.go — Main entry point for k3s binary; orchestrates server and agent initialization across all platforms.
  • cmd/server/main.go — Server-side initialization; manages control plane components including API server, scheduler, and controller manager.
  • cmd/agent/main.go — Agent-side initialization; handles node bootstrapping and joining to existing clusters.
  • Makefile — Build system; defines compilation, packaging, and distribution targets for all supported platforms.
  • .github/workflows/build-k3s.yaml — CI/CD pipeline definition; controls automated builds, tests, and release artifact generation.
  • go.mod — Go module manifest with critical k3s-io forks; defines containerd, cri-dockerd, and networking stack versions.
  • BUILDING.md — Developer setup and build documentation; essential for understanding build processes and environment requirements.

🛠️How to make changes

Add a new kubectl subcommand

  1. Create new command handler in cmd/kubectl/ directory alongside main.go (cmd/kubectl/main.go)
  2. Register command in k3s binary multiplexer by updating cmd/k3s/main.go to route to kubectl (cmd/k3s/main.go)
  3. Add unit tests for new kubectl command in _test.go file in same directory (cmd/kubectl/main.go)
  4. Update integration tests in .github/workflows/integration.yaml to validate new subcommand (.github/workflows/integration.yaml)

Add a new utility command (like cert, token, encrypt)

  1. Create cmd/newcommand/main.go following pattern of cmd/cert/main.go or cmd/token/main.go (cmd/cert/main.go)
  2. Register command in cmd/k3s/main.go to make it accessible via k3s newcommand (cmd/k3s/main.go)
  3. Add corresponding tests in cmd/newcommand/main_test.go (cmd/k3s/main_test.go)
  4. Document new command in BUILDING.md and relevant ADR if architectural impact (BUILDING.md)
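Steps 1 and 3 above can be sketched as a scaffold (all names below are hypothetical; only the cmd/ layout is taken from the repo):

```shell
# Hypothetical scaffold for a new `k3s newcommand` utility.
NEW=cmd/newcommand
mkdir -p "$NEW"

cat > "$NEW/main.go" <<'EOF'
package main

// Stub entry point; mirror the CLI wiring in cmd/cert/main.go.
func main() {}
EOF

cat > "$NEW/main_test.go" <<'EOF'
package main

import "testing"

// Placeholder test; follow the patterns in cmd/k3s/main_test.go.
func TestStub(t *testing.T) {}
EOF
```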

Add platform-specific behavior (Linux/Windows)

  1. Create platform-specific file: cmd/k3s/main_newplatform.go following existing main_linux.go and main_windows.go patterns (cmd/k3s/main_linux.go)
  2. Export platform init function and call from cmd/k3s/main.go within appropriate build tag conditional (cmd/k3s/main.go)
  3. Add corresponding platform test file: cmd/k3s/main_newplatform_test.go (cmd/k3s/main_test.go)
  4. Update GitHub Actions workflow for platform-specific testing in .github/workflows/e2e.yaml (.github/workflows/e2e.yaml)

Create a new Architecture Decision Record (ADR)

  1. Create docs/adrs/new-feature-adr.md following existing ADR format in docs/adrs/record-architecture-decisions.md (docs/adrs/record-architecture-decisions.md)
  2. Reference ADR in commit message and PR description for maintainer review (CONTRIBUTING.md)
  3. Update ROADMAP.md if ADR represents significant architectural change or feature (ROADMAP.md)

🔧Why these technologies

  • Go 1.26.2 — Single-binary cross-platform compilation; small footprint ideal for edge and IoT deployments; performance critical for resource-constrained environments.
  • containerd v2 (k3s fork) — Lightweight container runtime; significantly reduces memory/disk vs Docker; vendored to ensure compatibility and control.
  • etcd — Distributed key-value store for Kubernetes state; provides strong consistency and HA cluster coordination.
  • Flannel/Kube-router (pluggable) — Simple, lightweight networking; CNI-based architecture allows operator choice while keeping defaults minimal.
  • systemd/OpenRC/Docker integration — Multi-platform init system support maximizes compatibility across Linux distros, Windows, and containerized deployments.

⚖️Trade-offs already made

  • Bundled containerd instead of delegating to host Docker/CRI

    • Why: Simplifies installation; eliminates dependency hell and version mismatch issues; reduces disk footprint vs full Docker.
    • Consequence: K3s manages full container lifecycle; users cannot substitute CRI implementations without rebuilding (cri-dockerd available as opt-in).
  • Single binary with platform-specific init files

    • Why: Unified build/release process; leverages Go's cross-compilation; drastically simplifies installation.
    • Consequence: Binary size larger than pure CLI tools; all features shipped even if unused; platform-specific code paths must be carefully maintained.
  • Embedded etcd instead of requiring external cluster

    • Why: Zero-external-dependency single-node deployments; drastically improves adoption for edge/development.
    • Consequence: High-availability requires multi-node setup with external etcd or managed state; single-node resilience limited to disk persistence.
  • Forked/vendored critical dependencies (containerd, cri-dockerd, kube-router)

    • Why: Lock to tested versions; apply k3s-specific patches; avoid upstream breaking changes mid-release cycle.
    • Consequence: Maintenance burden for backports; potential lag behind upstream features/security patches; complexity in dependency upgrades.

🪤Traps & gotchas

K3s writes all state to /var/lib/rancher/k3s/ (Linux) or platform-equivalent; multiple instances on same machine conflict unless using different data directories (--data-dir flag). The single binary is dynamically linked to system libc/libsystemd in some distributions—Alpine Linux requires different handling. Websocket tunnel between server and agent requires proper firewall rules on port 6443 and 10250. SQLite3 backend not suitable for >100 node clusters (upstream Kubernetes limitation); must switch to etcd3/MySQL/Postgres via --datastore-endpoint and --datastore-cafile. The bundled Traefik and Klipper-lb have service port conflicts if multiple K3s instances run on same network.
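To illustrate the data-directory trap: two instances on one host need distinct state paths and non-conflicting ports. A sketch (`--data-dir` and `--https-listen-port` are real k3s server flags; the paths and the second port are illustrative):

```shell
# Dry-run wrapper: prints the commands instead of starting servers.
# Swap for `run() { "$@"; }` to actually execute.
run() { echo "+ $*"; }

run k3s server --data-dir /var/lib/rancher/k3s-a                           # instance A, default port 6443
run k3s server --data-dir /var/lib/rancher/k3s-b --https-listen-port 6444  # instance B, moved off 6443
```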

🏗️Architecture

💡Concepts to learn

  • Websocket Tunnel (kubelet API exposure) — K3s eliminates the need to expose port 10250 on worker nodes by tunneling the kubelet API to control plane over secure websockets—critical architectural difference from stock Kubernetes that enables secure edge deployments
  • Pluggable Datastore (SQLite3/etcd3/MySQL/Postgres) — K3s abstracts Kubernetes API server storage backend and defaults to embedded SQLite3 instead of etcd3, enabling single-binary deployment without separate datastore service—requires understanding API persistence and consistency models
  • Container Runtime Interface (CRI) with containerd — K3s bundles and patches containerd as its CRI implementation; understanding CRI is essential for debugging container lifecycle, image handling, and OCI compliance
  • Container Networking Interface (CNI) with Flannel — Flannel provides pod-to-pod networking in K3s via VXLAN overlay by default; alternatives (Kube-router netpol) are bundled—essential for understanding multi-node cluster networking and network policies
  • Binary Distribution and Self-Contained Packaging — K3s goal is a single <100MB binary with zero external dependencies beyond kernel/cgroups—requires careful dependency management, cross-compilation (arm64/armv7/amd64), and static linking where possible
  • Kubernetes Conformance Testing — K3s is CNCF-certified fully conformant Kubernetes; the integration.yaml and e2e.yaml pipelines run Kubernetes conformance test suite to validate specification compliance despite bundling and patching
  • kubernetes/kubernetes — Upstream Kubernetes distribution that K3s packages and extends; k3s replaces/wraps core Kubernetes binaries
  • rancher/rancher — Management platform built on top of K3s for multi-cluster orchestration and UI; tight integration with K3s lifecycle
  • containerd/containerd — Container runtime bundled and patched in K3s; maintained fork at k3s-io/containerd via go.mod replace
  • flannel-io/flannel — Default CNI implementation bundled in K3s; enables pod networking across nodes
  • microk8s-io/microk8s — Alternative lightweight Kubernetes distribution for edge/IoT; similar value proposition but different packaging (snap) and architectural choices
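The pluggable-datastore concept above maps to a single server flag. A sketch (`--datastore-endpoint` is a real flag and the SQLite path is the documented default; the DSN values below are illustrative):

```shell
# Default: embedded SQLite at /var/lib/rancher/k3s/server/db/state.db (no flags needed).
# Example endpoint strings for the other supported backends (values illustrative):
DS_MYSQL='mysql://user:pass@tcp(db.example.com:3306)/k3s'
DS_POSTGRES='postgres://user:pass@db.example.com:5432/k3s'
DS_ETCD='https://etcd-1.example.com:2379'

run() { echo "+ $*"; }   # dry-run; swap for real execution
run k3s server --datastore-endpoint "$DS_POSTGRES"
```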

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for cmd/token token management operations

The cmd/token directory exists but there are no visible integration tests in .github/workflows for token generation, rotation, and validation. K3s uses tokens for cluster bootstrap and authentication, making this critical path. The integration.yaml and e2e.yaml workflows could be enhanced with token-specific test cases to ensure token operations don't regress across releases.

  • [ ] Review cmd/token/main.go to understand token operations (generation, hashing, validation)
  • [ ] Check existing test patterns in integration.yaml and e2e.yaml workflows
  • [ ] Create test scenarios: token creation, token rotation, invalid token rejection, token lifecycle
  • [ ] Add test cases to .github/workflows/integration.yaml that exercise token management endpoints
  • [ ] Document expected token formats and validation rules in CONTRIBUTING.md if not already present

Add govulncheck coverage for k3s-io maintained fork dependencies

The repo has 18+ k3s-io maintained fork replacements (containerd, etcd, cri-dockerd, etc.) in go.mod but the govulncheck.yml workflow likely only scans the main module. These forks represent significant maintenance surface area and may have divergent vulnerability profiles. Adding specific vulnerability scanning for these dependencies would close a security gap.

  • [ ] Review .github/workflows/govulncheck.yml to understand current scanning scope
  • [ ] Identify which k3s-io forks are actively maintained vs. stable (check MAINTAINERS and fork commits)
  • [ ] Extend govulncheck workflow to explicitly scan k3s-io/containerd, k3s-io/etcd, k3s-io/cri-dockerd, etc.
  • [ ] Add a matrix strategy to run govulncheck against each major fork dependency
  • [ ] Document fork maintenance status and vulnerability scan results in MAINTAINERS or a SECURITY_FORKS.md file
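Steps 3 and 4 above could look roughly like this (the module list is illustrative; derive the real set from the go.mod replace directives, and treat the CI shape as a sketch, not the existing workflow):

```shell
# Dry-run wrapper: prints what a per-fork scan matrix would execute.
run() { echo "+ $*"; }

FORKS=$(printf '%s\n' \
  'github.com/k3s-io/containerd' \
  'github.com/k3s-io/etcd' \
  'github.com/k3s-io/cri-dockerd')

for mod in $FORKS; do
  # In CI each matrix entry would clone the fork and run govulncheck inside it.
  run git clone "https://$mod.git"
  run govulncheck ./...
done
```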

Add unit tests for cmd/cert certificate management operations

The cmd/cert/main.go exists for certificate operations (likely serving kubeconfig, CA certs for secure communication) but cmd/k3s/main_test.go appears to be the only visible test file in cmd/. Certificate handling is security-critical. Unit tests for cert generation, rotation, validation, and expiration checks would improve reliability and prevent regressions.

  • [ ] Create cmd/cert/main_test.go following patterns from cmd/k3s/main_test.go
  • [ ] Review cmd/cert/main.go to identify certificate operations (generation, signing, validation, renewal)
  • [ ] Write unit tests covering: valid cert generation, expired cert handling, invalid cert rejection, cert chain validation
  • [ ] Add test cases for SA (Service Account) token binding if applicable
  • [ ] Run tests locally with 'make test' and verify coverage meets project standards (check .github/workflows/unitcoverage.yaml thresholds)

🌿Good first issues

  • Add integration test coverage for SQLite3 schema migrations in scenarios where K3s upgrades across major versions—current integration.yaml likely covers basic sqlite3 operation but not complex upgrade paths
  • Document and test the airgap deployment flow end-to-end (currently airgap.yaml exists but is underdocumented)—add step-by-step examples to BUILDING.md for offline environments
  • Extend unit test coverage for cmd/cert/main.go certificate generation logic, which is critical for secure agent-server communication but has minimal test harness compared to agent startup code

Top contributors


📝Recent commits

  • 5463d00 — Fix install-go cache key (brandond)
  • 8cd83c1 — Bump kine to v0.15.0 (brandond)
  • 0f05ca2 — add lost return (nameearly)
  • df4d34f — Fix s3 path (rafaelbreno)
  • 72ce287 — chore: Update rancher/klipper-lb image versions (github-actions[bot])
  • 20e1096 — Improve K3s BUILDING and CONTRIBUTING guides (manuelbuil)
  • cbb7c2a — Test for cri-dockerd exec (vitorsavian)
  • eb39518 — Bump cri-dockerd and cri-tools (vitorsavian)
  • ef3bb65 — chore: Update CoreDNS image version in manifests/coredns.yaml (github-actions[bot])
  • ab8bc39 — chore: Update CoreDNS image version in scripts/airgap/image-list (github-actions[bot])

🔒Security observations

  • High · Privileged Container Execution in docker-compose.yml — docker-compose.yml - server and agent services. The docker-compose.yml file configures both server and agent services with 'privileged: true'. This grants the container full access to the host system, including all devices and capabilities, significantly increasing the attack surface and potential for privilege escalation. Fix: Remove 'privileged: true' and use specific capability grants (e.g., 'cap_add') only when necessary. If privileged mode is required for Kubernetes operation, document the requirement and consider alternative security controls like AppArmor or seccomp profiles.
  • High · Kubeconfig File World-Readable Permissions — docker-compose.yml - server service environment variable K3S_KUBECONFIG_MODE. The docker-compose.yml sets 'K3S_KUBECONFIG_MODE=666', making the kubeconfig file readable and writable by any user. Kubeconfig files contain authentication tokens and certificates that should be restricted to the owner only. This violates the principle of least privilege. Fix: Change 'K3S_KUBECONFIG_MODE=666' to '0600' to restrict access to the owner only. Ensure kubeconfig files are never world-readable in production environments.
  • High · Sensitive Token in Environment Variable — docker-compose.yml - server service environment variable K3S_TOKEN. The K3S_TOKEN is passed via docker-compose environment variables without encryption. Environment variables can be exposed in logs, process listings, and container inspect output. This is a sensitive authentication token for cluster communication. Fix: Use Docker secrets or external secret management systems (e.g., HashiCorp Vault) instead of environment variables. Never pass sensitive tokens through environment variables in production. Use '--token-file' flag if available as an alternative.
  • Medium · Exposed Kubernetes API Server Port — docker-compose.yml - server service ports section. The Kubernetes API server is exposed on port 6443 without any network segmentation or authentication context shown. This could allow unauthorized access to the cluster control plane if network policies are not properly configured. Fix: Restrict access to port 6443 using firewall rules or network policies. Use a reverse proxy with mutual TLS authentication. Ensure only authorized clients can communicate with the API server. Consider using network segmentation and VPN access for production deployments.
  • Medium · Missing Security Headers Configuration — docker-compose.yml - server service ports section. No explicit security headers configuration is visible for the ingress controller exposed on ports 80 and 443. This could allow for various web-based attacks if the ingress is used for public-facing applications. Fix: Configure ingress controller with security headers (X-Content-Type-Options, X-Frame-Options, Strict-Transport-Security, CSP). Implement network policies and RBAC rules. Use TLS/HTTPS exclusively and redirect HTTP to HTTPS.
  • Medium · No Resource Limits Defined — docker-compose.yml - server and agent services. The docker-compose.yml does not define CPU or memory resource limits for the containers. This could lead to resource exhaustion attacks or denial of service conditions affecting the host system. Fix: Add a 'deploy.resources' section with memory and CPU limits and reservations. For example: 'resources: { limits: { cpus: "2", memory: "2G" }, reservations: { cpus: "1", memory: "1G" } }'
  • Medium · Incomplete docker-compose.yml File — docker-compose.yml - agent service. The agent service definition in docker-compose.yml appears to be truncated/incomplete, with environment variable 'e' starting but not finishing. This suggests the file may have syntax errors or missing critical configuration. Fix: Complete and validate the docker-compose.yml file. Run 'docker-compose config' to check for syntax errors. Ensure all services are properly configured with all required security settings.
  • Low · Go Version 1.26.2 May Contain Vulnerabilities — go.mod - go version declaration. The project uses Go 1.26.2. While this appears to be a recent version, it's important to monitor for security advisories. Go dependency vulnerabilities should be tracked. Fix: Track Go security advisories, keep the toolchain on the latest patch release, and rely on the existing govulncheck.yml workflow to surface known standard-library and dependency vulnerabilities.

LLM-derived; treat as a starting point, not a security audit.
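Two of the High findings above have direct server-flag remediations. A sketch (`--write-kubeconfig-mode` and `--token-file` are documented k3s server flags; the token path is illustrative):

```shell
run() { echo "+ $*"; }   # dry-run; swap for real execution

# Owner-only kubeconfig instead of K3S_KUBECONFIG_MODE=666,
# and a token file instead of a K3S_TOKEN environment variable:
run k3s server --write-kubeconfig-mode 0600 --token-file /etc/k3s/token
```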


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
