weaveworks/scope
Monitoring, visualisation & management for Docker & Kubernetes
Stale — last commit 3y ago
Weakest axis: last commit was 3y ago; no CI workflows detected
Has a license and tests — a clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
- ✓ 11 active contributors
- ✓ Distributed ownership (top contributor 49% of recent commits)
- ✓ Apache-2.0 licensed
- ✓ Tests present
- ⚠ Stale — last commit 3y ago
- ⚠ No CI workflows detected
What would change the summary?
- → Use as dependency: Mixed → Healthy if ≥1 commit in the last 365 days
- → Deploy as-is: Mixed → Healthy if ≥1 commit in the last 180 days
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Forkable" badge
Paste into your README — live-updates from the latest cached analysis.
[repopilot.app/r/weaveworks/scope](https://repopilot.app/r/weaveworks/scope) — paste at the top of your README.md; renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/weaveworks/scope on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: weaveworks/scope
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/weaveworks/scope shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
WAIT — Stale — last commit 3y ago
- 11 active contributors
- Distributed ownership (top contributor 49% of recent commits)
- Apache-2.0 licensed
- Tests present
- ⚠ Stale — last commit 3y ago
- ⚠ No CI workflows detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live weaveworks/scope
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/weaveworks/scope.
What it runs against: a local clone of weaveworks/scope — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in weaveworks/scope | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 1066 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of weaveworks/scope. If you don't
# have one yet, run these first:
#
# git clone https://github.com/weaveworks/scope.git
# cd scope
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of weaveworks/scope and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "weaveworks/scope(\.git)?\b" \
  && ok "origin remote is weaveworks/scope" \
  || miss "origin remote is not weaveworks/scope (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
# (The stock Apache-2.0 LICENSE text begins "Apache License", not the SPDX ID.)
(grep -qiE "apache license" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"[[:space:]]*:[[:space:]]*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"
# 4. Critical files exist
for f in \
  "app/collector.go" \
  "app/router.go" \
  "client/app/scripts/components/app.js" \
  "app/api_topology.go" \
  "client/app/scripts/charts/nodes-chart.js"; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 1066 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~1036d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/weaveworks/scope"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
Weave Scope is a real-time visualization and management platform for Docker containers and Kubernetes clusters that automatically generates interactive topology maps of containerized applications. It lets operators understand service dependencies, view live metrics, and execute management commands (pause, restart, stop containers, launch terminals) directly from a browser UI without leaving the visualization. The codebase is a hybrid monorepo: a Go backend (the app/ directory, with collector.go, router.go, and control_router.go handling topology aggregation and gRPC pipes, and app/multitenant/ providing multi-tenant cloud deployment via Consul/S3/SQS integrations) and a JavaScript/React frontend (43+ JS files using Redux + D3 for visualization and xterm for terminal emulation). Build targets are the backend Dockerfile and the frontend webpack bundle (see Makefile, package.json). State is managed via Redux stores; topology data flows through collector → merger → API endpoints (api_topology.go).
👥Who it's for
DevOps engineers and SREs managing containerized microservices who need operational visibility into running Docker/Kubernetes infrastructure, topology discovery, and remote container management—particularly those lacking observability infrastructure or needing quick troubleshooting without CLI context-switching.
🌱Maturity & risk
DEPRECATED and no longer maintained (see README: 'THIS PROJECT IS NO LONGER MAINTAINED'). Despite deprecation, the project shows substantial maturity: 1.1M+ lines of Go, comprehensive test coverage (app/*_test.go, app/multitenant/*_test.go), CircleCI integration, and active production use by Apester, Deepfence, and MayaData. However, no new feature development or security updates should be expected.
HIGH RISK for new deployments: project is officially unmaintained and will receive no security patches or dependency updates. Frontend dependencies (React 16.9, D3 modules, Redux ecosystem) are 4+ years old with known CVEs in xterm (3.14.5), lodash, and moment. Large dependency surface (42+ npm packages) with no active maintenance. Not suitable for new critical infrastructure; consider Prometheus + Grafana + Kubernetes Dashboard alternatives.
Active areas of work
No active development. The repository is archived/unmaintained per the README, and no recent commits are visible in the provided metadata. Historical artifacts (the billing emitter in app/multitenant/billing_emitter.go, AWS collector integration, and multi-tenant support) suggest a former cloud-SaaS focus.
🚀Get running
Check README for instructions.
Daily commands: make (builds Go backend) && make client (npm build), then ./scope -help for options. Dev: npm start (webpack dev server on :4000) + go run ./cmd/scope (backend on :4040). Requires Docker daemon access and appropriate RBAC for Kubernetes probes.
🗺️Map of the codebase
- app/collector.go — Core collector that gathers topology data from Docker/Kubernetes and manages the main data pipeline
- app/router.go — Central message router that handles topology updates, control requests, and pipes data to connected clients
- client/app/scripts/components/app.js — Main React component serving as the application shell and entry point for the frontend SPA
- app/api_topology.go — REST API endpoint that exposes topology data to clients, handling graph serialization and filtering
- client/app/scripts/charts/nodes-chart.js — Primary D3-based visualization component rendering the interactive node graph and relationships
- Makefile — Build orchestration defining compilation, testing, and containerization of both backend Go services and frontend JavaScript
- app/multitenant/collector.go — Multitenant extension enabling Scope to operate as a hosted service with per-tenant data isolation
🛠️How to make changes
Add a new Topology Data Source
- Create a new reporter package implementing the Reporter interface for your data source (e.g., metrics, events, logs) (app/collector.go)
- Register the reporter in the collector's WireUpEndpoint() to start gathering data (app/collector.go)
- Emit topology.Node and topology.Edge objects with metadata into the topology graph (app/merger.go)
- New topology will be automatically serialized and served via the GET /api/topology endpoint (app/api_topology.go)
Add a new Control Action (e.g., custom kill/pause)
- Define the control ID and handler in controls.go mapping action names to executable functions (app/controls.go)
- Register the control with the AppControls struct to make it discoverable to clients (app/controls.go)
- Router will dispatch incoming control requests to the handler via control_router.go (app/control_router.go)
- Expose a control UI button in the client React component and wire it to request-actions.js (client/app/scripts/actions/request-actions.js)
Add a new UI Panel or Detail View
- Create a new React component in client/app/scripts/components/ following the pattern of node-details.js (client/app/scripts/components/node-details.js)
- Connect the component to Flux action dispatch in app-actions.js to listen for topology/node selection changes (client/app/scripts/actions/app-actions.js)
- Import and render the new component in the main app.js layout alongside existing panels (client/app/scripts/components/app.js)
- Data flows automatically from router WebSocket updates to component props via state stores
Deploy Scope to Kubernetes
- Build the Docker image via 'make docker', which compiles the backend and bundles frontend assets (Makefile)
- Push the image to a registry and deploy a DaemonSet or Deployment manifest mounting the Docker socket or Kubernetes API (backend/Dockerfile)
- Expose the UI port (default 4040) and backend API port via a Service, and enable WebSocket upgrade in the ingress (app/pipes.go)
🔧Why these technologies
- Go backend (app/) — Efficient collection and routing of high-volume topology data from Docker/Kubernetes APIs with minimal overhead
- D3.js (client/app/scripts/charts/) — Powerful graph visualization engine for rendering force-directed topology layouts with real-time updates
- React (client/app/scripts/components/) — Component-based UI with Flux state management for complex interactive dashboard with node details and controls
- WebSocket pipes (app/pipes.go) — Bidirectional streaming of topology updates and control commands without polling overhead
- Consul/Memcache (app/multitenant/) — Distributed coordination and caching for horizontal scaling across multiple Scope agents in large clusters
⚖️Trade-offs already made
- Stateful WebSocket connections instead of REST polling
  - Why: Topology graph changes rapidly (pods starting/stopping); polling would cause high API load and stale UI
  - Consequence: Requires stateful server; harder to scale horizontally without sticky sessions or distributed pub/sub (addressed via Consul)
- Client-side D3 rendering instead of server-generated SVG/images
  - Why: Enables interactive zoom, pan, node selection, and dynamic filtering without server round-trips
  - Consequence: Large JavaScript bundle; requires modern browser; layout computation runs on user's CPU
- Collector runs on every host (DaemonSet) vs central collection
  - Why: Decouples monitoring from cluster topology; avoids single point of failure; scales with cluster
  - Consequence: Data aggregation complexity; merger must deduplicate/reconcile overlapping observations from multiple collectors
- Topology as in-memory graph (merger.go) vs persistent database
  - Why: Sub-second topology updates; low latency for real-time visualization; stateless API servers
  - Consequence: Topology lost on server restart; horizontal scaling requires distributed state (Consul in multitenant mode)
🚫Non-goals (don't propose these)
- Long-term historical storage of topology changes
- Authentication and authorization (relies on network/reverse-proxy layer)
- Persistent storage of alerts or control audit logs
- Support for non-containerized infrastructure (VMs, bare metal)
- Real-time log streaming or application performance monitoring (distinct from Scope's network-centric view)
- Automatic remediation or self-healing actions based on detected issues
🪤Traps & gotchas
- Project is DEPRECATED — do not file feature requests or expect security patches.
- Frontend uses React 16 (end-of-life); xterm 3.14.5 has known CVEs.
- Backend gRPC probes require network connectivity to container runtimes (no firewall isolation).
- Multitenant mode requires an external Consul cluster, S3 bucket, SQS queue, and memcache; misconfiguration fails silently.
- Kubernetes probe RBAC is minimal: it assumes cluster-admin or broad service account permissions.
- No structured logging; debugging requires Go pprof endpoints.
- Frontend state mutations via Redux middleware can cause race conditions under high-frequency updates (see merger_test.go for concurrency patterns).
- Build requires cgo (CGO_ENABLED=1) for some dependencies.
🏗️Architecture
💡Concepts to learn
- Topology Aggregation & Merging — Scope's core innovation is reconstructing application graphs from distributed probes; understanding app/merger.go's deduplication and edge-detection logic is essential for extending the collector
- gRPC Bidirectional Streaming — Scope probes use gRPC streaming to push metrics to the backend in real-time; app/pipe_router.go implements the pipe abstraction for probe connections
- Container Control Planes & Runtime Sockets — app/controls.go executes container operations via Docker/containerd sockets; understanding Unix domain sockets and container API versions is critical for extending control functionality
- Redux Immutable State Trees — Frontend uses redux-immutable for normalized topology state; graph mutations must be pure to enable time-travel debugging and prevent race conditions in high-frequency metric streams
- D3 Force-Directed Graph Layout — Scope's topology visualization uses d3-force (implicit in dagre + d3-selection) to position nodes; tweaking link distance, charge, and collision detection affects user interaction responsiveness
- Kubernetes API Watch Mechanism — Scope probes watch Kubernetes events (Pods, Services, Deployments) for topology changes; understanding list-and-watch protocol is needed to add new resource types (e.g., NetworkPolicies, Gateways)
- WebSocket Multiplexing & Pipe Routing — app/pipe_router.go routes multiple probe connections over single WebSocket; understanding the pipe protocol is essential for diagnosing connectivity issues or scaling multi-tenant deployments
🔗Related repos
- prometheus/prometheus — Modern alternative for metrics collection and time-series storage; complements Scope's topology visualization in current observability stacks
- kubernetes/kubernetes — Scope is a Kubernetes probe consumer; understanding API groups, watch mechanisms, and RBAC is required for extending Kubernetes support
- grafana/grafana — Replaces Scope's dashboard UI in modern setups; Scope's topology visualization can feed into Grafana via custom plugins
- weaveworks/weave — Sibling project from Weaveworks; Scope integrates with Weave networking for service-to-service visualization
- containerd/containerd — Container runtime abstraction; Scope probes integrate with containerd in addition to Docker for metric collection
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add unit tests for app/multitenant package with focus on AWS and Consul integrations
The multitenant package contains critical components (aws_collector.go, consul_client.go, consul_pipe_router.go, s3_client.go, sqs_control_router.go) but only has partial test coverage. The consul_pipe_router_internal_test.go and mock_consul_client_internal_test.go exist, but aws_collector.go, s3_client.go, billing_emitter_test.go (exists but likely incomplete), and memcache_client.go lack comprehensive unit tests. This is high-value for a distributed system handling multi-tenant scenarios.
- [ ] Create app/multitenant/aws_collector_test.go with tests for S3 bucket operations and error handling
- [ ] Create app/multitenant/s3_client_test.go with mocked S3 interactions and edge cases
- [ ] Create app/multitenant/memcache_client_test.go with cache hit/miss and serialization tests
- [ ] Expand app/multitenant/consul_pipe_router_internal_test.go to cover edge cases in pipe routing
- [ ] Ensure test coverage reports >80% for the multitenant package
Add GitHub Actions workflow for frontend static analysis and security scanning
The repo has CI/CD via CircleCI (evident from README badges), but the client/ directory has ESLint and Stylelint configs (.eslintrc, .stylelintrc) that could benefit from automated GitHub Actions. Currently there is no visible frontend security scanning for npm dependencies (no npm audit or Snyk in CI). Adding it helps surface supply-chain risks and catches regressions early.
- [ ] Create .github/workflows/frontend-lint.yml to run eslint on client/app/scripts and client code
- [ ] Create .github/workflows/npm-security.yml to run npm audit and fail on high/critical vulnerabilities
- [ ] Add Stylelint validation step in the frontend workflow for CSS files
- [ ] Document the new workflows in CONTRIBUTING.md with instructions for local testing
- [ ] Ensure workflows run on all PRs and relevant branches
Refactor and expand app/api_*.go endpoint tests to cover error cases and edge conditions
The API layer has test files (api_report_test.go, api_topologies_test.go, api_topology_test.go) but based on the file structure, they likely focus on happy paths. Given this is a distributed monitoring tool, robust error handling (network failures, malformed requests, timeout scenarios) is critical. Missing tests for api_topologies.go and api_topology.go endpoint validation could hide subtle bugs.
- [ ] Audit app/api_report_test.go, app/api_topologies_test.go, and app/api_topology_test.go for edge case coverage (nil inputs, empty responses, malformed JSON)
- [ ] Add benchmarks in app/benchmark_internal_test.go for high-throughput scenarios (app already has benchmark_internal_test.go stub)
- [ ] Add integration-style tests for app/pipes_internal_test.go covering pipe closure and error propagation
- [ ] Document test coverage gaps in CONTRIBUTING.md and add metrics to Makefile for coverage reporting
- [ ] Ensure app/controls_test.go covers concurrent control execution scenarios
🌿Good first issues
- Add unit tests for app/controls_test.go covering exec/restart/pause failures; currently minimal coverage on error paths like container-not-found or timeout scenarios.
- Document Multitenant Consul configuration in CONTRIBUTING.md with example docker-compose.yml; app/multitenant/consul_client.go lacks inline config docs and new contributors struggle with service discovery setup.
- Upgrade xterm from 3.14.5 to 4.x+ and audit React component bindings in the frontend (likely client/app/scripts/components/terminal.js) to address known CVEs in the 3.x line; the current version blocks security scans.
⭐Top contributors
- @bboreham — 49 commits
- Daniel Holbach — 27 commits
- @dependabot[bot] — 12 commits
- @goku321 — 3 commits
- @morancj — 2 commits
📝Recent commits
- 12175b9 — Add deprecation notice (#3917) (lasomethingsomething)
- 477f678 — Merge pull request #3911 from weaveworks/remove-weave-cloud (morancj)
- 6273efc — Docs: remove Weave Cloud (morancj)
- 79a26df — updating weave.cloud docs (#3910) (enekofb)
- bc058a7 — Merge pull request #3902 from weaveworks/go-1-17-8 (bboreham)
- d95b823 — Use weaveworks build image (bboreham)
- bcdf2ca — Make lint happy (bboreham)
- 3fb47b9 — Use build image from this branch (bboreham)
- 87988e5 — Update Go to 1.17.8; tidy build image (bboreham)
- a774857 — Merge pull request #3863 from linux-on-ibm-z/scope_s390x (bboreham)
🔒Security observations
- High · Outdated and vulnerable dependencies — client/package.json (dependencies section). Multiple dependencies have known vulnerabilities, notably moment 2.24.0 (ReDoS), react 16.9.0 (outdated, with known XSS issues), xterm 3.14.5 (multiple CVEs), and an old lodash line. These are substantially out of date. Fix: update all dependencies to the latest secure versions, run 'npm audit' and remediate high/critical issues, use 'npm ci' for reproducible builds, and add automated dependency scanning to CI/CD.
- High · Prototype pollution and template injection in lodash — client/package.json (lodash dependency). lodash releases before 4.17.21 carry known issues, including prototype pollution and template command injection (CVE-2021-23337, fixed in 4.17.21). These could allow attackers to pollute Object prototypes or execute arbitrary code. Fix: pin lodash to 4.17.21 or later and monitor for newer releases.
- Medium · React XSS risk in outdated version — client/package.json (react, react-dom dependencies). react@16.9.0 and react-dom@16.9.0 are significantly outdated (released 2019); multiple XSS fixes and security improvements have shipped since, particularly around dangerouslySetInnerHTML handling and event handler validation. Fix: upgrade React to 18.x or the latest stable version, and review the codebase for dangerouslySetInnerHTML usage, replacing it with safer alternatives such as sanitization libraries.
- Medium · Potential XSS risk from styled-components — client/package.json (styled-components dependency). styled-components@4.3.2 is outdated (2019 release); later versions include security patches for CSS injection and XSS through dynamic styling. Fix: upgrade styled-components to 5.3.x or later and test dynamic styling to ensure no user input reaches style generation unsanitized.
- Medium · Deprecated project status — README.md. The README states 'DEPRECATED: Weave Scope', so the project receives no security updates, leaving all dependencies vulnerable indefinitely. Fix: if the project is still in use, migrate to actively maintained alternatives or establish a security maintenance program; if deprecated, communicate this clearly to users and provide migration guidance.
- Medium · Missing security headers configuration — app/ (server configuration files). No visible security header configuration (Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, etc.) in the application structure; for a web monitoring tool this is critical. Fix: implement security headers in the server configuration (app/server_helpers.go or Dockerfile), adding CSP, X-Frame-Options: DENY, X-Content-Type-Options: nosniff, and Strict-Transport-Security.
- Medium · Potential unsafe data handling in pipes and routers — app/pipes.go, app/router.go, app/control_router.go. These files handle data routing and piping without visible input validation and could be vectors for injection attacks when processing untrusted data. Fix: review all data ingestion points for proper input validation and sanitization; enforce strict type checking and validate all external inputs before processing.
- Low · Babel configuration surface — client/.babelrc and client/package.json. babel-plugin-lodash modifies lodash imports at compile time; while generally safe, this increases build complexity and potential attack surface. Fix: verify the plugin is necessary, consider native ES6 imports with tree-shaking, and keep Babel and all plugins updated.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.