RepoPilot

m1k1o/neko

A self-hosted virtual browser that runs in Docker and uses WebRTC.


Healthy across all four use cases

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license and CI — a clean foundation to fork and modify (note: no test directory was detected).

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 1d ago
  • 14 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Single-maintainer risk — top contributor 86% of recent commits
  • No test directory detected

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant: "RepoPilot: Healthy"
[![RepoPilot: Healthy](https://repopilot.app/api/badge/m1k1o/neko)](https://repopilot.app/r/m1k1o/neko)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/m1k1o/neko on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: m1k1o/neko

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/m1k1o/neko shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 1d ago
  • 14 active contributors
  • Apache-2.0 licensed
  • CI configured
  • ⚠ Single-maintainer risk — top contributor 86% of recent commits
  • ⚠ No test directory detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live m1k1o/neko repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/m1k1o/neko.

What it runs against: a local clone of m1k1o/neko — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in m1k1o/neko | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 31 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>m1k1o/neko</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of m1k1o/neko. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/m1k1o/neko.git
#   cd neko
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of m1k1o/neko and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "m1k1o/neko(\.git)?\b" \
  && ok "origin remote is m1k1o/neko" \
  || miss "origin remote is not m1k1o/neko (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
# (Standard Apache LICENSE files begin "Apache License", not "Apache-2.0",
# so match either form.)
(grep -qiE "Apache License|Apache-2\.0" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in README.md Dockerfile.tmpl client/package.json \
         .github/workflows/dockerhub.yml apps/chromium/Dockerfile; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 31 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/m1k1o/neko"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Neko is a self-hosted virtual browser that runs entirely in Docker and streams the desktop over WebRTC, enabling users to access a fully functional browser (Chromium, Firefox, Brave, Chrome) from anywhere via a web client. It solves the problem of needing secure, isolated, and shareable browser access without installing software locally—multiple users can connect simultaneously to the same virtual environment. Monorepo structure: Go backend server in root with WebRTC streaming logic, TypeScript/Vue 2.7 client in /client (195K+ lines), multiple application Dockerfiles in /apps/{brave,chromium,firefox,google-chrome} with nginx/supervisor configs, and /website for documentation. CI workflows (/.github/workflows/) orchestrate multi-architecture (Intel/ARM/Nvidia) Docker image builds.

👥Who it's for

DevOps engineers and system administrators deploying containerized services, developers testing web applications in isolated environments, privacy-conscious users wanting sandboxed browsing, and teams needing shared browser access without maintaining individual installations.

🌱Maturity & risk

Actively developed and production-ready. The project has comprehensive CI/CD pipelines (11+ GitHub workflows for client, server, Docker builds, and multi-arch support), releases tracked on GitHub, 698K+ lines of Go backend code, and evidence of sustained maintenance. Multiple browser flavors (Chromium, Firefox, Brave, Chrome) indicate maturity.

Low-to-moderate risk: a single dominant maintainer (m1k1o) is a bus-factor concern, and while CI coverage is solid, no test directory was detected, so test coverage should be verified rather than assumed. Docker-heavy deployment means version pinning is critical (see apps/*/Dockerfile). WebRTC streaming adds complexity; browser-specific edge cases could be problematic. No security audit history is visible in the SECURITY.md snippet.

Active areas of work

Active development on multi-arch Docker support (workflows for ghcr_intel, ghcr_nvidia), client-side TypeScript enhancements (client_build workflow), and server-side Go testing (server_test workflow). Version tracking shows 2.5.0 in package.json; GitHub Actions indicates recent build/deploy activity.

🚀Get running

Clone: git clone https://github.com/m1k1o/neko.git && cd neko. Go backend: make (check that a Makefile with suitable targets exists first). Vue client: cd client && npm install && npm run serve. Docker: the root ships Dockerfile.tmpl (a template) rather than a plain Dockerfile, so build per-app images instead, e.g. docker build -f apps/chromium/Dockerfile .

Daily commands: Backend: make or go build (check Makefile). Frontend client: cd client && npm run serve (Vue CLI dev server on port 8080). Docker: docker-compose up or individual docker build -f apps/{browser}/Dockerfile.
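The Docker path above can be collected into a copy-pasteable sketch. The image path, --shm-size value, and UDP port range below are illustrative assumptions drawn from typical neko-style deployments, not values taken from the project docs; verify against the README before running.

```shell
# Quickstart sketch. IMAGE, --shm-size, and UDP_RANGE are assumptions;
# check the neko README for the real values.
IMAGE="ghcr.io/m1k1o/neko/chromium:latest"   # assumed image path
HTTP_PORT=8080
UDP_RANGE="52000-52100"                      # example WebRTC media range

RUN_CMD="docker run -d --shm-size=2g \
  -p ${HTTP_PORT}:8080 \
  -p ${UDP_RANGE}:${UDP_RANGE}/udp \
  ${IMAGE}"

# Dry-run: print the command so it can be reviewed before executing.
echo "$RUN_CMD"
```

The UDP range matters because WebRTC media does not flow over the HTTP port; if only 8080 is forwarded, the client will connect but never receive video.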

🗺️Map of the codebase

  • README.md — Entry point for understanding the project's purpose, architecture, deployment modes, and supported applications
  • Dockerfile.tmpl — Master Docker template that defines the containerization strategy and base runtime for all neko instances
  • client/package.json — Frontend dependencies and build scripts; defines the Vue.js client framework and WebRTC dependencies
  • .github/workflows/dockerhub.yml — Production CI/CD pipeline that builds and publishes docker images; critical for release automation
  • apps/chromium/Dockerfile — Reference implementation for app-specific containerization; template for adding new browser/application variants
  • .github/workflows/client_build.yml — Frontend build pipeline; must pass before client code reaches production
  • .github/workflows/server_test.yml — Backend test suite execution; ensures streaming server stability and WebRTC correctness

🛠️How to make changes

Add a New Browser/Application Variant

  1. Create a new directory under apps/ (e.g., apps/my-browser/) (apps/)
  2. Copy and adapt Dockerfile from a similar app, installing your target application and setting up user/permissions (apps/chromium/Dockerfile)
  3. Create a supervisord.conf in the new app folder to manage process startup and respawn policy (apps/chromium/supervisord.conf)
  4. Create an openbox.xml window manager config or adapt from an existing app for GUI isolation (apps/chromium/openbox.xml)
  5. Add a Dockerfile.nvidia variant if GPU acceleration is needed (optional) (apps/chromium/Dockerfile.nvidia)
  6. Update .github/workflows/image_app.yml to include the new variant in the build matrix (.github/workflows/image_app.yml)
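The checklist above can be sketched as a scaffolding script. The variant name my-browser is a placeholder, and the file list mirrors steps 2-4; to stay safe to paste, the script only prints the commands instead of mutating the tree.

```shell
# Scaffold a new app variant by copying the chromium reference files.
# "my-browser" is a placeholder name; run from the repo root.
SRC="apps/chromium"
DST="apps/my-browser"

echo "mkdir -p ${DST}"
for f in Dockerfile supervisord.conf openbox.xml; do
  # Dry-run: print each copy command for review instead of executing it.
  echo "cp ${SRC}/${f} ${DST}/${f}"
done
# After copying: edit the new Dockerfile to install your target application,
# then add the variant to the matrix in .github/workflows/image_app.yml.
```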

Add a New CI/CD Target Registry

  1. Create a new GitHub Actions workflow file in .github/workflows/ (e.g., myreg.yml) (.github/workflows/)
  2. Reference the Dockerfile.tmpl template logic and adapt push targets and registry credentials (.github/workflows/dockerhub.yml)
  3. Define a matrix for multi-platform/multi-variant builds (CPU, NVIDIA, app variants) (.github/workflows/image_app.yml)
  4. Add secrets to GitHub repo settings (registry token, credentials) and reference in workflow env (.github/workflows/ghcr.yml)
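Under the hood, such a workflow wraps a buildx invocation. This sketch shows the shape of that call; the registry host and tag are placeholders, and the platform list simply mirrors the matrix described above.

```shell
# Shape of the multi-arch build/push a new registry workflow would run.
# REGISTRY is a placeholder; adapt PLATFORMS to your build matrix.
REGISTRY="registry.example.com/neko"   # placeholder target registry
PLATFORMS="linux/amd64,linux/arm64"
APP="chromium"

BUILD_CMD="docker buildx build \
  --platform ${PLATFORMS} \
  -t ${REGISTRY}/${APP}:latest \
  --push apps/${APP}"

# Dry-run: print the command; in CI this executes after `docker login`
# using the registry secrets referenced in the workflow env.
echo "$BUILD_CMD"
```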

Update Frontend Client UI/Features

  1. Install or add dependencies in client/package.json (e.g., new WebRTC library, Vue plugin) (client/package.json)
  2. Create or modify Vue component files in client/src (follow Vue 2.7 + TypeScript conventions) (client)
  3. Run client build workflow to transpile TypeScript and bundle for production (.github/workflows/client_build.yml)
  4. Ensure client tests pass in client_test.yml workflow before merging (.github/workflows/client_test.yml)

🔧Why these technologies

  • Docker + Multi-variant Dockerfiles — Containerization ensures reproducible, portable isolated environments; multi-variant approach (CPU/NVIDIA) allows resource-flexible deployments across bare metal and cloud
  • WebRTC (P2P streaming) — Provides low-latency, encrypted, peer-to-peer video/audio transport without central media server; browser-native support eliminates client installation
  • Vue.js + TypeScript (Client) — Modern SPA framework for real-time canvas rendering and WebRTC peer connection management; TypeScript ensures type safety for complex state
  • Supervisord + Openbox — Supervisord manages process lifecycle and respawning within containers; Openbox provides minimal X11 window manager for headless GUI rendering
  • GitHub Actions + Matrix Builds — Automates multi-platform (ARM, x86), multi-app, and GPU-variant image builds; public CI eliminates contributor setup friction

⚖️Trade-offs already made

  • Self-hosted only (no managed SaaS offering)

    • Why: Reduces operational complexity and licensing costs; gives users full control over data and environment
    • Consequence: Users must manage Docker deployment, networking, and security themselves; no centralized monitoring or support unless self-provided
  • Single-user per container instance

    • Why: Simplifies isolation, security model, and state management; avoids user context switching and window manager conflicts
    • Consequence: Horizontal scaling requires spawning multiple containers; higher resource footprint for multi-user deployments
  • X11 + Openbox for GUI (vs. Wayland, VNC)

    • Why: X11 is mature, widely compatible with Linux applications; Openbox is minimal and fast for headless rendering
    • Consequence: Wayland apps may not work; X11 has known security model limitations; VNC alternative would add latency
  • Browser-based client (no native electron/mobile apps)

    • Why: Maximizes reach (any OS + browser); single codebase for all clients; no app store distribution friction
    • Consequence: Limited by browser APIs (no low-level hardware access); slightly higher latency than native streaming; no offline capability

🚫Non-goals (don't propose these)

  • Does not provide built-in user authentication or multi-tenant isolation (assumes reverse proxy or VPN layer)
  • Does not handle persistent storage or data synchronization across container restarts
  • Does not support Windows or macOS as host OS (Linux host required for X11/container runtime)
  • Does not provide centralized monitoring, logging, or metrics aggregation (relies on docker logs and external tools)

🪤Traps & gotchas

  1. Docker-in-Docker or privileged mode: the virtual browser likely requires elevated privileges or GPU access for rendering; check docker-compose or the deployment docs for required flags (--privileged, --gpus).
  2. WebRTC port exposure: streaming over WebRTC requires proper ICE candidate handling and port forwarding (typically 8080 for HTTP, plus dynamic WebRTC ports); misconfiguration breaks connectivity.
  3. Browser policy configs: each app (Chromium, Firefox) has policies.json or prefs.json files that may enforce restrictions; modifying these may require a container rebuild.
  4. Go/C dependencies: building locally requires X11 dev headers and likely ffmpeg; the full setup is easiest via Docker.
  5. Supervisor configs: apps/*/supervisord.conf controls the process lifecycle; the container must be restarted for config changes to apply.
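The WebRTC port trap is the one that most often bites in practice: the UDP range forwarded by Docker must match the range the server advertises to clients. A minimal sketch of the check, with illustrative port values (the real range comes from your server config, not from this artifact):

```shell
# A mismatch between the advertised and forwarded UDP ranges produces the
# classic "connects but no video" failure. The values are illustrative only.
ADVERTISED_RANGE="52000-52100"   # what the server config claims
FORWARDED_RANGE="52000-52100"    # what docker run / compose actually maps

if [ "$ADVERTISED_RANGE" = "$FORWARDED_RANGE" ]; then
  echo "ok: WebRTC UDP range matches (${FORWARDED_RANGE}/udp)"
else
  echo "FAIL: advertised ${ADVERTISED_RANGE} != forwarded ${FORWARDED_RANGE}"
fi
```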

🏗️Architecture

💡Concepts to learn

  • WebRTC (Web Real-Time Communication) — Core transport for neko's low-latency desktop streaming; understanding ICE candidates, STUN/TURN servers, and peer connection negotiation is essential for diagnosing streaming failures
  • X11 Display Server & Xvfb (Virtual Framebuffer) — Neko runs a headless X11 server with virtual display to capture browser rendering; knowing how X11 forwarding and Xvfb work explains why the Docker setup requires specific X11 environment variables
  • Docker Multi-Stage Builds & Multi-Platform Images (buildx) — Neko's CI builds separate Intel/ARM/Nvidia variants using buildx; understanding image layering and cross-platform compatibility is critical for modifying Dockerfiles
  • Supervisor (Process Control System) — Every browser app (Chromium, Firefox) uses supervisord.conf to manage X11, browser, and audio processes; process failure or restart loops often stem from supervisor misconfiguration
  • Vuex State Management (Vue 2) — Client-side state (user input, remote control, connection status) is managed via Vuex; understanding mutations, actions, and getters is essential for frontend feature development
  • Video Codecs & Encoding (VP8/H.264, ffmpeg) — Desktop capture must be encoded for WebRTC transmission; ffmpeg or similar is likely used to encode X11 output; codec choice affects latency and bandwidth
  • OpenBox Window Manager — Lightweight window manager used in neko's virtual environment; configs in apps/*/openbox.xml control window decorations and keyboard shortcuts; affects UX of the remote browser
  • novnc/noVNC — Alternative remote desktop via VNC over WebSocket; solves similar 'access desktop from browser' problem but uses older VNC protocol instead of WebRTC
  • kasmweb/KasmVNC — Commercial VNC-based remote browser; direct competitor with similar multi-user and Docker deployment model
  • pion/webrtc — Go WebRTC library likely used by neko's backend for real-time media streaming; essential for understanding the WebRTC transport layer
  • jqlang/jq — Not a neko dependency, but useful for parsing JSON output when debugging Docker and compose configs
  • moby/moby — Docker engine itself; understanding container internals helps troubleshoot neko's privileged mode and device access requirements

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for WebRTC connection flow in server_test.yml workflow

The repo has multiple CI workflows (client_test.yml, webpage_test.yml, server_test.yml) but there's no visible end-to-end test validating the WebRTC streaming pipeline between server and client. This is critical for a virtual browser project. Adding integration tests would catch regressions in the core streaming functionality before release.

  • [ ] Review existing server_test.yml workflow to understand current test coverage
  • [ ] Create integration test suite (likely in Go/server codebase) that validates: WebRTC peer connection establishment, video/audio stream initialization, and basic control message flow
  • [ ] Add test step to server_test.yml that runs these integration tests against a minimal neko server instance
  • [ ] Document test setup requirements (e.g., headless browser environment for client-side testing)
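A common pattern for the CI step in item 3 is to gate slow integration tests behind a Go build tag. The tag name and timeout below are assumptions, not existing project conventions; the sketch only prints the command it would run.

```shell
# Sketch of the extra server_test.yml step. The "integration" build tag and
# the 10m timeout are assumptions; adapt to whatever convention the repo adopts.
TEST_CMD="go test -tags integration -timeout 10m ./..."

# Dry-run: in CI this would execute after a minimal neko server is started.
echo "would run: ${TEST_CMD}"
```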

Add Dockerfile validation and multi-arch build testing for app variants

The repo maintains 8+ app-specific Dockerfiles (brave, chromium, firefox, etc.) with both standard and nvidia variants, but there's no CI workflow validating that all variants build successfully across architectures. New contributors could easily break builds. This would catch issues early.

  • [ ] Create new GitHub Actions workflow: .github/workflows/dockerfile_lint.yml
  • [ ] Add hadolint step to validate Dockerfile syntax across all apps/*/Dockerfile and apps/*/Dockerfile.nvidia
  • [ ] Add matrix build job that tests building 2-3 representative app variants (e.g., chromium, firefox) on both amd64 and arm64 to catch architecture-specific issues
  • [ ] Document in CONTRIBUTING.md (if exists) or create it, specifying which apps to test before PR submission
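The lint sweep in the first two items can be sketched as a loop over the app Dockerfiles. hadolint is a real Dockerfile linter; the glob and the failure counting are assumptions about how the workflow would be wired.

```shell
# Lint every app Dockerfile variant; count failures so CI can gate on them.
# Requires hadolint on PATH (see github.com/hadolint/hadolint for install).
fail=0
for df in apps/*/Dockerfile apps/*/Dockerfile.nvidia; do
  [ -f "$df" ] || continue           # skip glob patterns that matched nothing
  echo "linting $df"
  hadolint "$df" || fail=$((fail+1))
done
echo "hadolint failures: $fail"
# Exit non-zero on any failure so the workflow step fails the build:
[ "$fail" -eq 0 ]
```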

Add client library (build:lib) export tests and TypeScript declaration validation

The package.json shows neko-client can be built as a library (build:lib script generates neko-lib), but there are no tests validating the library exports, TypeScript definitions, or that consuming projects can actually import and use it. This is important for projects integrating neko as a dependency.

  • [ ] Create test file: src/tests/lib-exports.test.ts to validate all public exports from src/lib.ts are accessible
  • [ ] Add TypeScript compilation check in client_test.yml to ensure generated .d.ts files are valid and complete
  • [ ] Create minimal example consumer project in tests/lib-integration/ that imports neko-lib and validates basic API surface (e.g., event emitter, connection setup)
  • [ ] Add build:lib step to client_test.yml workflow to catch export/declaration issues before release

🌿Good first issues

  • Add TypeScript types and tests to /client/src/store—currently no visible test files for Vuex state mutations and actions despite 195K TypeScript LOC, improving testability for state management changes.
  • Create integration tests for multi-user simultaneous connection scenarios in the Go backend — the server_test.yml workflow exists, but no end-to-end coverage of WebRTC streaming under load is visible.
  • Document environment variables and Docker build args for /apps Dockerfiles—no visible .env.example or BUILD_ARGS documentation, making it hard for contributors to customize browser policies or supervisor settings.

Top contributors


📝Recent commits

  • 2c124c5 — Upgrade go dependencies (#644) (m1k1o)
  • fdb0be3 — add scam warning alert to homepage. (m1k1o)
  • 2e28ec5 — Add latest version check and update logic for GHCR workflows (#637) (m1k1o)
  • a545115 — fix regression #639, #638. (m1k1o)
  • c9b184c — update docker compose. (m1k1o)
  • dbc2206 — go mod tidy. (m1k1o)
  • 920c34b — update dependencies in go.mod and go.sum. (m1k1o)
  • a4f1cfb — refactor: standardize constant names and improve goroutine handling across multiple files. (m1k1o)
  • 79e4980 — chore: upgrade Go version to 1.25. (m1k1o)
  • c54bcf1 — fix(api): update profile handling to ensure only the name field is modified. (m1k1o)

🔒Security observations

Failed to generate security analysis.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
