weaveworks/weave
Simple, resilient multi-host container networking — and more.
Healthy across all four use cases
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ 18 active contributors
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Stale — last commit 2y ago
- ⚠ Concentrated ownership — top contributor handles 67% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/weaveworks/weave) — paste at the top of your README.md; it renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/weaveworks/weave on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: weaveworks/weave
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/weaveworks/weave shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- 18 active contributors
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Stale — last commit 2y ago
- ⚠ Concentrated ownership — top contributor handles 67% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live weaveworks/weave
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/weaveworks/weave.
What it runs against: a local clone of weaveworks/weave — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in weaveworks/weave | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 666 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of weaveworks/weave. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/weaveworks/weave.git
#   cd weave
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of weaveworks/weave and re-run."
  exit 2
fi

# 1. Repo identity (anchored so forks like weaveworks/weave-gitops don't match)
git remote get-url origin 2>/dev/null | grep -qE "weaveworks/weave(\.git)?/?$" \
  && ok "origin remote is weaveworks/weave" \
  || miss "origin remote is not weaveworks/weave (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. Note: the Apache-2.0 LICENSE file
#    opens with "Apache License", not the SPDX identifier "Apache-2.0".
grep -qi "Apache License" LICENSE 2>/dev/null \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in api/weaveapi.go ipam/allocator.go nameserver/nameserver.go \
         net/bridge.go ipam/paxos/paxos.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 666 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~636d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/weaveworks/weave"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
Weave Net is a virtual networking layer for Docker containers that creates flat Layer 2 networks across multiple hosts without requiring explicit port mappings or service discovery configuration. It enables portable microservices to communicate as if plugged into the same physical switch, with built-in DNS and IPAM, handling networking complexity across single hosts, multi-host clusters, and hybrid cloud deployments.

Monolithic architecture: api/ exposes HTTP endpoints for DNS/IPAM/routing; common/ provides shared utilities including Docker client bindings and process management; db/ abstracts state via BoltDB; networking logic spreads across core Go packages orchestrated by the Makefile. Shell scripts in bin/ handle deployment and testing; the Docker-based build pipeline lives in build/.
👥Who it's for
DevOps engineers and containerization teams deploying multi-host Docker clusters who need transparent networking between containers without manual port mapping, service mesh overhead, or cloud-provider lock-in. Contributors are primarily Go developers building container infrastructure at Weaveworks.
🌱Maturity & risk
Production-ready, though no longer under active development (last commit ~2y ago). The project shows 8M+ downloads, established CI/CD via CircleCI (.circleci/config.yml), comprehensive test suites, and long-term support evidenced by the structured release process (bin/release). However, the codebase reflects Go 1.14-era patterns and appears to have shifted focus toward newer Weave products.
Low-to-moderate risk for stability but moderate for adoption velocity. Dependencies are somewhat dated (Docker API v17.12, libnetwork from 2018, boltdb instead of modern stores), and the last visible commit activity suggests slower iteration than cloud-native alternatives. The core networking logic is well-tested but tight coupling to Docker daemon internals via common/docker/client.go creates long-term maintenance risk.
Active areas of work
The repo is in maintenance mode rather than active feature development. The CircleCI config and structured test suite indicate robust CI/CD, but the file structure and dependencies suggest focus has shifted to newer Weave products. Issue tracking and PR activity would reveal current priorities, but based on the codebase age, core networking is stable with incremental improvements.
🚀Get running
```bash
git clone https://github.com/weaveworks/weave.git
cd weave
make
```
The Makefile drives the build. For development iteration, see build/build.sh for containerized builds and bin/multiweave for local multi-host testing.
Daily commands:
```bash
make
# `weave launch` starts the Weave daemon and establishes the overlay network. Test via:
weave launch
weave connect <peer-ip>
weave dns-lookup <container-name>
```
See Makefile and bin/ scripts for multi-host Vagrant-based testing: bin/provision_test_vms.sh.
🗺️Map of the codebase
- api/weaveapi.go — Main API entry point that routes all HTTP requests to the DNS, IPAM, and router services.
- ipam/allocator.go — Core IP address allocation logic using distributed Paxos consensus; essential for multi-host networking.
- nameserver/nameserver.go — DNS service implementation for container discovery; critical for service connectivity.
- net/bridge.go — Bridge network configuration and management; fundamental to virtual network creation.
- ipam/paxos/paxos.go — Distributed consensus protocol ensuring consistent IP allocation across all peers.
- Makefile — Build orchestration and release process; required to understand compilation and deployment.
🛠️How to make changes
Add a new IPAM endpoint
- Define the HTTP handler in api/ipam.go following existing patterns (GET/POST methods).
- Register the route in api/weaveapi.go's mux setup or HandleFunc calls.
- Implement the core business logic in ipam/allocator.go if it modifies allocation state.
- Add corresponding HTTP tests in ipam/http_test.go.
Add a new DNS record type or resolution rule
- Extend the entry structure in nameserver/entry.go with new fields.
- Update DNS resolution logic in nameserver/dns.go to handle the new type.
- Expose via HTTP API in nameserver/http.go if user-facing.
- Add test cases in nameserver/dns_test.go.
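A minimal sketch of the first two steps, assuming a hypothetical TXT record type; the `entry` struct and `resolve` function are illustrative stand-ins, not the real code in nameserver/entry.go or nameserver/dns.go:

```go
package main

import (
	"fmt"
	"strings"
)

// entry loosely mirrors a per-name record; RecordType and Text are the
// hypothetical new fields a TXT-style record would need.
type entry struct {
	Hostname   string
	Addr       string
	RecordType string // "A" today; "TXT" is the assumed new type
	Text       string // payload for the hypothetical TXT support
}

// resolve shows the shape of the dispatch a new record type requires in
// the resolution path: match the name, switch on type, return the payload.
func resolve(entries []entry, name, rtype string) (string, bool) {
	for _, e := range entries {
		if !strings.EqualFold(e.Hostname, name) || e.RecordType != rtype {
			continue
		}
		switch rtype {
		case "A":
			return e.Addr, true
		case "TXT":
			return e.Text, true
		}
	}
	return "", false
}

func main() {
	db := []entry{
		{Hostname: "web.weave.local", Addr: "10.32.0.7", RecordType: "A"},
		{Hostname: "web.weave.local", RecordType: "TXT", Text: "v=1"},
	}
	addr, _ := resolve(db, "web.weave.local", "A")
	txt, _ := resolve(db, "WEB.weave.local", "TXT") // lookups are case-insensitive
	fmt.Println(addr, txt)
}
```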
Implement network policy chain rules
- Define policy structures and parsing in common/chains/npc.go.
- Integrate chain evaluation into the packet-filtering logic (net/bridge.go or the network layer).
- Expose policy API endpoints in api/router.go.
Add persistent state migration or schema change
- Modify the schema or data structure (e.g., in ipam/allocator.go or nameserver/nameserver.go).
- Implement migration logic in db/boltdb.go if using BoltDB buckets.
- Add version tracking or backwards-compatibility checks.
- Test the migration path with ipam/allocator_test.go or similar.
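The version-tracking step can be sketched without BoltDB; this `store`/`migration` pair is a toy stand-in for BoltDB buckets, showing only the pattern — apply migrations newer than the stored version, in order, then record the new version:

```go
package main

import "fmt"

// store abstracts the persistence layer (db/boltdb.go in the real repo).
type store map[string]string

// migration upgrades a store to schema version `to`.
type migration struct {
	to    int
	apply func(store)
}

// migrate runs every migration newer than the stored version, then
// records the resulting version — the backwards-compatibility check the
// checklist asks for. Re-running it is a no-op.
func migrate(s store, migrations []migration) int {
	v := 0
	fmt.Sscanf(s["schema_version"], "%d", &v) // absent key → legacy v0
	for _, m := range migrations {
		if m.to > v {
			m.apply(s)
			v = m.to
		}
	}
	s["schema_version"] = fmt.Sprintf("%d", v)
	return v
}

func main() {
	s := store{"owner": "c1"} // legacy, unversioned data
	final := migrate(s, []migration{
		{to: 1, apply: func(s store) { s["owner/nickname"] = s["owner"] }},
	})
	fmt.Println(final, s["owner/nickname"])
}
```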
🔧Why these technologies
- Go 1.14 — Lightweight goroutines enable efficient concurrent handling of multiple container networking tasks; cross-platform compilation for diverse architectures.
- Paxos consensus (ipam/paxos/) — Ensures distributed IP allocation consistency across multiple hosts without a central authority; tolerates peer failures.
- BoltDB (db/boltdb.go) — Embedded key-value store for fast, persistent state without external dependencies; simple operational model.
- Docker SDK — Direct integration with Docker daemon for container lifecycle events, network namespace access, and metadata retrieval.
- iptables (coreos/go-iptables) — Kernel-level packet filtering for network policy enforcement and traffic control.
⚖️Trade-offs already made
- **Distributed consensus via Paxos rather than centralized etcd or Consul**
  - Why: Reduces external dependencies and operational complexity; the peer-to-peer architecture fits the container networking use case.
  - Consequence: Requires careful handling of network partitions; the Paxos implementation is complex and harder to debug than centralized approaches.
- **Virtual overlay network (bridges) rather than underlay/BGP routing**
  - Why: Works across any underlying infrastructure (cloud, bare metal, mixed); containers are portable without infrastructure changes.
  - Consequence: Slight encapsulation overhead; relies on VXLAN or similar tunneling, adding latency compared to native routing.
- **In-process DNS server (nameserver/) rather than external resolver**
  - Why: Low latency and tight integration with local IPAM state; no external service dependency.
  - Consequence: DNS zone management is local-only; multi-host DNS queries require gossip/replication.
- **Ring-based IP space tracking (ipam/ring/) for allocation**
  - Why: Efficient O(1) allocation in most cases; compact representation for large address spaces.
  - Consequence: Fragmentation over time as IPs are allocated and released; may require periodic defragmentation.
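The ring trade-off can be illustrated with a toy model (an assumption-laden sketch — the real ipam/ring/ tracks address ranges across peers, not single slots): allocation is cheap while the ring is sparse, and the release-then-reallocate sequence shows how holes accumulate behind the cursor:

```go
package main

import "fmt"

// ringAlloc hands out offsets from a fixed range by scanning clockwise
// from a cursor — a toy version of the ring idea.
type ringAlloc struct {
	size   int
	used   map[int]bool
	cursor int
}

func newRingAlloc(size int) *ringAlloc {
	return &ringAlloc{size: size, used: map[int]bool{}}
}

// Allocate returns the next free offset at or after the cursor, wrapping
// around; effectively O(1) while the ring is sparse, degrading toward
// O(size) as it fragments.
func (r *ringAlloc) Allocate() (int, bool) {
	for i := 0; i < r.size; i++ {
		slot := (r.cursor + i) % r.size
		if !r.used[slot] {
			r.used[slot] = true
			r.cursor = (slot + 1) % r.size
			return slot, true
		}
	}
	return 0, false // ring exhausted
}

// Release frees a slot but leaves the cursor alone — the freed hole sits
// behind the cursor until the scan wraps, which is the fragmentation cost.
func (r *ringAlloc) Release(slot int) { delete(r.used, slot) }

func main() {
	r := newRingAlloc(4)
	a, _ := r.Allocate()
	b, _ := r.Allocate()
	r.Release(a)
	c, _ := r.Allocate() // skips the freed slot 0; allocates ahead of cursor
	fmt.Println(a, b, c)
}
```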
🚫Non-goals (don't propose these)
- Does not provide authentication or encryption at network layer (relies on host kernel isolation and optional plugin encryption).
- Does not support non-Docker container runtimes (focused on Docker; other runtimes would require separate integration).
- Does not handle persistent storage orchestration or volume management.
- Not a load balancer or service mesh (basic DNS discovery only).
- Does not support Windows containers natively (Linux-focused architecture).
🪤Traps & gotchas
- Host kernel requirements: Weave requires Linux kernel features (VXLAN, iptables, network namespaces); macOS/Windows need Vagrant VMs.
- Docker daemon coupling: The implementation is tightly bound to Docker's libnetwork API; upgrading Docker versions can break compatibility.
- BoltDB limitations: The single-file embedded database has poor concurrency; high-traffic clusters may experience lock contention.
- CNI vs Docker mode: The repo supports both Docker-native and CNI modes with different code paths; mixing them on the same host causes conflicts.
- No explicit Go module pin tracking visible: Uses the older dep-era Gopkg.toml/Gopkg.lock; verify transitive dependencies manually.
🏗️Architecture
💡Concepts to learn
- VXLAN (Virtual Extensible LAN) — Weave's core overlay network mechanism; understanding VXLAN tunneling over UDP is essential for troubleshooting network latency and packet loss
- IPAM (IP Address Management) — Weave's distributed IPAM in api/ipam.go coordinates subnet allocation across peers without a central authority; critical for understanding how it avoids collisions in decentralized clusters
- Network Namespaces (Linux) — Kernel isolation primitive that Weave leverages to give each container its own network stack; understanding namespaces is a prerequisite for debugging networking issues
- iptables Rule Chains — Weave implements network policies via iptables chains (common/chains/npc.go); requires understanding netfilter hook points and rule traversal order
- Gossip Protocol — Weave uses gossip-based peer discovery and topology dissemination for decentralized coordination; enables resilience without a central control plane
- libnetwork (Docker Networking) — Docker's pluggable networking layer that Weave integrates with via common/docker/client.go; deprecated in newer Docker/Moby but still core to this codebase
- BoltDB (Embedded Key-Value Store) — Weave's state persistence mechanism in db/boltdb.go; understanding transaction semantics and bucket hierarchies is needed for modifying persistent state handling
🔗Related repos
- containernetworking/cni — Container Networking Interface specification that Weave implements; required reading for understanding the plugin architecture in this repo
- projectcalico/calico — Alternative multi-host container networking using BGP instead of VXLAN; the primary competitor solving the same multi-host networking problem
- flannel/flannel — Simpler alternative to Weave offering basic Layer 3 overlay networks; often chosen for Kubernetes deployments where Weave's Layer 2 is overkill
- moby/moby — The Docker daemon codebase; Weave tightly integrates via libnetwork, so understanding moby's container lifecycle is essential for debugging integration issues
- weaveworks/weave-gitops — Companion project from Weaveworks; a GitOps orchestration tool often deployed alongside Weave Net for full cluster management
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add unit tests for IPAM allocator logic (ipam/allocate.go and ipam/allocator.go)
The IPAM (IP Address Management) module is critical for Weave Net's core functionality, but test coverage in the ipam/ directory appears thin. Given the complexity of IP allocation, deallocation, and conflict resolution in multi-host environments, comprehensive unit tests would catch regressions early and improve maintainability. This is a high-value contribution that directly impacts reliability.
- [ ] Create ipam/allocate_test.go with tests for allocation logic (happy path, edge cases, concurrent allocation)
- [ ] Create ipam/allocator_test.go with tests for allocator state management and persistence
- [ ] Add tests for race conditions in concurrent IP allocation scenarios
- [ ] Ensure tests cover integration with db/boltdb.go for persistence
- [ ] Run tests locally and verify coverage is >80% for ipam package
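The race-condition checkbox boils down to one property: concurrent callers never receive the same address. This sketch pins that property against a toy `allocator` (a stand-in, not the real ipam/allocator.go) to show the shape of such a test:

```go
package main

import (
	"fmt"
	"sync"
)

// allocator is a minimal stand-in; the real allocator is far richer.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) Allocate() int {
	a.mu.Lock()
	defer a.mu.Unlock()
	a.next++
	return a.next
}

// uniqueUnderConcurrency is the invariant a race test should assert:
// n concurrent Allocate calls yield n distinct addresses.
func uniqueUnderConcurrency(a *allocator, n int) bool {
	var wg sync.WaitGroup
	results := make(chan int, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			results <- a.Allocate()
		}()
	}
	wg.Wait()
	close(results)
	seen := map[int]bool{}
	for r := range results {
		if seen[r] {
			return false // duplicate handed out — a real bug
		}
		seen[r] = true
	}
	return true
}

func main() {
	fmt.Println(uniqueUnderConcurrency(&allocator{}, 100))
}
```

Run such tests with `go test -race` so the race detector also flags unsynchronized access paths that happen not to produce duplicates.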
Add GitHub Actions workflow for multi-arch builds (arm64, ppc64le, s390x support)
The repo has build infrastructure for multiple architectures (qemu-aarch64, qemu-arm, qemu-ppc64le, qemu-s390x SHA256 files in build/shasums/), but the CI config (.circleci/config.yml) appears to be CircleCI-focused. Adding a GitHub Actions workflow for building and testing on multiple architectures would improve accessibility for contributors and catch architecture-specific bugs. This aligns with the existing multi-arch build support evident in the codebase.
- [ ] Create .github/workflows/multi-arch-build.yml with matrix strategy for linux/amd64, linux/arm64, linux/arm/v7, linux/ppc64le, linux/s390x
- [ ] Integrate with existing build/Dockerfile and build/build.sh scripts
- [ ] Add step to verify qemu-static binaries match checksums in build/shasums/
- [ ] Include basic smoke tests from bin/circle-test-smoke for each architecture
- [ ] Document in CONTRIBUTING.md how to run multi-arch builds locally
Add API integration tests for weaveapi.go endpoints (dns, ipam, router)
The api/ directory exposes critical endpoints (api/dns.go, api/ipam.go, api/router.go, api/weaveapi.go) but no test files exist for these API handlers. Given that Weave Net is a networked system with external API consumers, integration tests validating request/response contracts are essential. This would prevent breaking changes and improve API stability for users.
- [ ] Create api/weaveapi_test.go with HTTP tests for main API endpoints
- [ ] Create api/dns_test.go with tests for DNS resolution endpoints (mocking DNS backend)
- [ ] Create api/ipam_test.go with tests for IP allocation/release API calls
- [ ] Create api/router_test.go with tests for router state endpoint
- [ ] Add table-driven tests for error cases (invalid input, missing resources, internal errors)
- [ ] Use httptest package for isolated testing without external dependencies
🌿Good first issues
- Add integration tests for DNS lookup failures under high container churn — api/dns.go lacks explicit timeout and retry test coverage; see bin/circle-test-smoke for test infrastructure.
- Document an IPAM subnet planning guide with worked examples — the api/ipam.go algorithm is sound but no runbook exists for operators choosing cluster-wide CIDR layouts; add to CONTRIBUTING.md with examples.
- Extract iptables rule generation into testable pure functions — common/chains/npc.go currently applies rules directly; refactoring to separate rule composition from iptables execution would enable deterministic testing without root/mocking.
⭐Top contributors
- @bboreham — 67 commits
- [@dholbach](https://github.com/dholbach) (Daniel Holbach) — 5 commits
- @murali-reddy — 5 commits
- @enekofb — 3 commits
- @foot — 3 commits
📝Recent commits
- 8c84763 — Issues/3964 - added how to encrypt traffic (#3965) (enekofb)
- 25ccf44 — updated / removed references to weave cloud (#3962) (enekofb)
- d30828a — updated docs to dont use weave cloud (#3961) (enekofb)
- 6d7a29a — Merge pull request #3946 from rajch/master (kingdonb)
- 1686ded — Apply suggestions from code review (rajch)
- 9a5e1e3 — Update docs to include possible containerd issue. (rajch)
- e371215 — Merge pull request #3905 from weaveworks/suspend-external-bug-bounty (Daniel Holbach)
- 53b12b6 — Merge pull request #3907 from dholbach/remove-preview-build (Daniel Holbach)
- 5367e0e — Remove preview build (Daniel Holbach)
- 77445d0 — Suspend bug bounty program (morancj)
🔒Security observations
- High · Outdated Go Module Version — go.mod. The project uses Go 1.14, which reached end-of-life in August 2021 and no longer receives security updates. Fix: update to a currently supported Go version (1.21 or later), and review and update all dependencies for compatibility with the new version.
- High · Multiple Outdated and Vulnerable Dependencies — go.mod (dependencies section). Several dependencies have known vulnerabilities and are significantly outdated: docker/docker (17.12.0-rc1 from 2018), boltdb/bolt (no longer maintained), aws-sdk-go (v1.13.22 from 2017), and others. Fix: audit and update all dependencies to their latest stable versions (`go get -u`, `go mod tidy`), and scan with `govulncheck` or `go list -json -m all | nancy sleuth`.
- High · Use of Unmaintained BoltDB — go.mod, db/boltdb.go. boltdb/bolt is archived and no longer maintained; while functional, it will not receive security updates or bug fixes. Fix: consider migrating to an actively maintained key-value store, such as etcd-io/bbolt, the official successor fork.
- Medium · Docker Client Connection Security — common/docker/client.go, common/http.go. The Docker client integration (fsouza/go-dockerclient dependency) could, depending on implementation, expose Docker socket vulnerabilities or unencrypted communication. Fix: ensure Docker connections use TLS verification, keep socket permissions restrictive (600 or similar), and never run with --insecure-registry or skip-tls-verify in production.
- Medium · IPAM and Network Configuration Exposure — api/ipam.go, ipam/http.go. The IPAM API may handle sensitive network configuration; if not properly authenticated and authorized, it could allow unauthorized network manipulation. Fix: implement strong authentication (mTLS, token-based) for all IPAM endpoints, enforce authorization checks before any allocation/deallocation operation, and audit all network-modifying endpoints.
- Medium · DNS Service Potential Vulnerabilities — api/dns.go, miekg/dns v1.0.4. The DNS functionality could be vulnerable to DNS poisoning, cache poisoning, or amplification attacks if not properly hardened. Fix: validate and sanitize all DNS query inputs, rate-limit DNS queries, use authenticated DNS queries where applicable, and update miekg/dns to the latest version.
- Medium · SOCKS5 Implementation Risk — go.mod (armon/go-socks5 dependency). SOCKS5 implementations can be vulnerable to injection attacks if input is not properly validated. Fix: validate and sanitize all SOCKS5 traffic, limit SOCKS5 usage to trusted networks, and monitor for suspicious activity.
- Medium · AWS SDK Security Posture — go.mod. aws-sdk-go v1.13.22 from 2017 contains known vulnerabilities and lacks modern security features; AWS credentials could be mishandled. Fix: update to AWS SDK v2, implement credential rotation, prefer IAM roles over long-lived credentials, and enable request-signing verification.
- Low · Missing Security Headers Configuration — common/http.go, api/weaveapi.go. The HTTP server implementation may not set security headers such as HSTS, CSP, or X-Frame-Options, increasing exposure to certain attack classes. Fix: add Strict-Transport-Security, Content-Security-Policy, and X-Frame-Options headers to all HTTP responses.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.