docker-archive/classicswarm
Swarm Classic: a container clustering system. Not to be confused with Docker Swarm mode, which lives at https://github.com/docker/swarmkit
Healthy across all four use cases
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license, tests, and CI — a clean foundation to fork and modify.
- Documented and popular — a useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
- ✓ 20 active contributors
- ✓ Distributed ownership (top contributor 40% of recent commits)
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Stale — last commit 6y ago
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/docker-archive/classicswarm)

Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/docker-archive/classicswarm on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: docker-archive/classicswarm
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/docker-archive/classicswarm shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- 20 active contributors
- Distributed ownership (top contributor 40% of recent commits)
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Stale — last commit 6y ago
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live docker-archive/classicswarm
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/docker-archive/classicswarm.
What it runs against: a local clone of docker-archive/classicswarm — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in docker-archive/classicswarm | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 2186 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of docker-archive/classicswarm. If you don't
# have one yet, run these first:
#
# git clone https://github.com/docker-archive/classicswarm.git
# cd classicswarm
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of docker-archive/classicswarm and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "docker-archive/classicswarm(\.git)?\b" \
  && ok "origin remote is docker-archive/classicswarm" \
  || miss "origin remote is not docker-archive/classicswarm (artifact may be from a fork)"
# 2. License matches what RepoPilot saw. The Apache-2.0 LICENSE file opens with
# "Apache License", not the SPDX identifier, so match on that.
(grep -qi "Apache License" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"[[:space:]]*:[[:space:]]*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"
# 4. Critical files exist
for f in cluster/swarm/cluster.go api/server.go cluster/cluster.go cli/cli.go cluster/engine.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 2186 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~2156d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/docker-archive/classicswarm"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
Docker Swarm Classic is a native container clustering system that pools multiple Docker hosts into a single virtual host using an API proxy layer. It predates Docker Swarm mode (built into Docker Engine 1.12+) and provides container orchestration by intercepting Docker API calls and distributing containers across a cluster of Docker daemons. Monolithic single-binary design: api/ contains the HTTP server and API handlers (primary.go, replica.go for HA), cluster/ contains the clustering logic (cluster.go orchestrates engines, container scheduling, event monitoring via event_monitor.go), and cli/ wraps the management interface. The server proxies Docker API calls from clients to appropriate engines in the cluster.
👥Who it's for
DevOps engineers and system administrators managing Docker deployments before Swarm mode became standard (2014–2018 era). Users who need to orchestrate containers across multiple hosts with a unified API endpoint rather than managing individual Docker daemons.
🌱Maturity & risk
This project is archived and no longer actively maintained. It was Docker's original container orchestration solution (2014–2018) and is feature-complete but not receiving updates. The README explicitly warns users to migrate to Docker Swarm mode or alternative orchestration systems. The test coverage is present (handlers_test.go, engine_test.go, config_test.go) and CI via Travis CI was configured, but the lack of recent commits signals end-of-life status.
High risk for new projects: this is archived software with no active maintenance. No dependency lock file is visible in the file list, so dependency versions are unconstrained. The codebase depends on deprecated Docker APIs (the whole API-proxy strategy became obsolete when Swarm mode was released). Single-maintainer stewardship risk is extreme given the archived status.
Active areas of work
Nothing—this repository is archived. The last activity was archival and documentation updates. No active development, open PRs, or milestones are expected.
🚀Get running
git clone https://github.com/docker-archive/classicswarm.git
cd classicswarm
# Go 1.x required (no go.mod visible, likely older Go vendoring or GOPATH)
go build -o swarm ./cmd/swarm # inferred entry point
Check the Dockerfile for the exact build process used in CI.
Daily commands:
./swarm manage -H :3375 consul://consul-host:8500 # typical manager startup pointing to Consul for discovery
# OR
./swarm join --advertise=host:2375 consul://consul-host:8500 # join node to cluster
Exact flags in cli/flags.go and cli/manage.go. Requires a running Docker daemon and Consul (or Zookeeper) for discovery.
🗺️Map of the codebase
- cluster/swarm/cluster.go — Core cluster orchestration logic that implements container scheduling, resource management, and the primary coordination between swarm manager and Docker engines.
- api/server.go — API server entry point that proxies Docker API requests to the swarm cluster; essential for understanding how external clients interact with the system.
- cluster/cluster.go — Cluster abstraction layer managing engine discovery, health monitoring, and event handling across the Docker daemon pool.
- cli/cli.go — Command-line interface entry point showing how operators manage swarm creation, joining, and node discovery.
- cluster/engine.go — Docker engine abstraction representing individual hosts in the cluster; handles container operations and resource tracking.
- discovery/token/token.go — Token-based discovery mechanism for swarm node bootstrap and cluster membership management.
🛠️How to make changes
Add a new scheduling constraint or filter
- Define filter logic in cluster/engine_sorter_test.go to validate the expected ranking behavior
- Implement the filter method on the Engine type in cluster/engine.go (e.g., HasLabel, MatchesConstraint)
- Apply the filter in PlaceContainer (or the relevant scheduling method) in cluster/swarm/cluster.go
- Update CLI flags in cli/flags.go to expose the new constraint option to users
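As a sketch of what the filter-method step might look like — the `Engine` struct, `HasLabel`, and `filterEngines` here are simplified stand-ins invented for illustration, not the real types in cluster/engine.go:

```go
package main

import "fmt"

// Engine is a simplified stand-in for the Engine type in cluster/engine.go.
type Engine struct {
	Name   string
	Labels map[string]string
}

// HasLabel reports whether the engine carries the given label key/value —
// the shape a new constraint-filter method might take.
func (e *Engine) HasLabel(key, value string) bool {
	v, ok := e.Labels[key]
	return ok && v == value
}

// filterEngines keeps only engines satisfying the constraint, mirroring
// how a scheduler narrows candidates before placing a container.
func filterEngines(engines []*Engine, key, value string) []*Engine {
	var out []*Engine
	for _, e := range engines {
		if e.HasLabel(key, value) {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	engines := []*Engine{
		{Name: "node-1", Labels: map[string]string{"storage": "ssd"}},
		{Name: "node-2", Labels: map[string]string{"storage": "disk"}},
	}
	for _, e := range filterEngines(engines, "storage", "ssd") {
		fmt.Println(e.Name) // node-1
	}
}
```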
Add a new Docker API endpoint proxy
- Add a test case in api/handlers_test.go for the new endpoint behavior (e.g., GET /info)
- Implement the handler function in api/handlers.go, dispatching to the swarm cluster or individual engines
- Register the route in api/server.go by adding it to the server's mux/router setup
- If multi-engine aggregation is needed, add sorter logic in api/sorter.go or implement custom response building
Add support for a new discovery backend
- Create a new package under discovery/ with the backend implementation (e.g., discovery/consul/), using discovery/token/token.go as a template
- Implement the discovery interface (likely with Fetch or Register methods, similar to token.go)
- Register the new backend in cli/cli.go or cluster/options.go with a discovery scheme selector
- Update cluster initialization in cluster/swarm/cluster.go to instantiate the correct discovery backend
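A hypothetical version of that interface and a minimal in-memory backend might look like this. The interface name and method signatures follow the Fetch/Register shape the steps above guess at; they are assumptions, not the repo's actual discovery API:

```go
package main

import (
	"fmt"
	"sync"
)

// Discovery is a hypothetical version of the interface a new backend
// under discovery/ would implement.
type Discovery interface {
	Register(addr string) error // announce this node to the cluster
	Fetch() ([]string, error)   // list currently known node addresses
}

// memDiscovery is an in-memory backend — the kind of minimal
// implementation you might write before wiring up Consul or etcd.
type memDiscovery struct {
	mu    sync.Mutex
	nodes []string
}

func (d *memDiscovery) Register(addr string) error {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.nodes = append(d.nodes, addr)
	return nil
}

func (d *memDiscovery) Fetch() ([]string, error) {
	d.mu.Lock()
	defer d.mu.Unlock()
	// Return a copy so callers can't mutate internal state.
	return append([]string(nil), d.nodes...), nil
}

func main() {
	var d Discovery = &memDiscovery{}
	d.Register("192.168.0.10:2375")
	d.Register("192.168.0.11:2375")
	nodes, _ := d.Fetch()
	fmt.Println(nodes) // [192.168.0.10:2375 192.168.0.11:2375]
}
```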
Add a new container constraint or filter
- Define the constraint struct and parsing logic in cluster/options.go
- Add an evaluation method on the Container or Engine type in cluster/container.go
- Integrate the constraint check into PlaceContainer scheduling in cluster/swarm/cluster.go
- Expose the constraint via a CLI flag in cli/flags.go or cli/create.go for user input
🔧Why these technologies
- Go — Compiled language enabling fast clustering operations, easy Docker daemon integration via goroutines, and minimal runtime overhead on manager nodes.
- HTTP API proxy pattern — Transparent to Docker clients; allows swarm manager to intercept and route API calls to appropriate engines without modifying client code.
- Token-based discovery — Simple bootstrap mechanism for operator-initiated cluster formation without requiring external infrastructure; can be extended to Consul, etcd, etc.
- Event-driven monitoring — Real-time awareness of engine state changes (container starts/stops) enables reactive scheduling and failure detection without polling overhead.
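The event-driven point can be illustrated with a channel-based monitor — the `Event` type and `monitor` function are invented for this sketch, not the code in cluster/event_monitor.go:

```go
package main

import "fmt"

// Event is a simplified stand-in for a Docker daemon event.
type Event struct {
	Engine string
	Action string // e.g. "start", "die"
}

// monitor consumes the event stream and maintains a per-engine running
// container count — reacting to state changes instead of polling daemons.
func monitor(events <-chan Event) map[string]int {
	running := map[string]int{}
	for ev := range events {
		switch ev.Action {
		case "start":
			running[ev.Engine]++
		case "die":
			running[ev.Engine]--
		}
	}
	return running
}

func main() {
	events := make(chan Event, 3)
	events <- Event{"node-1", "start"}
	events <- Event{"node-1", "start"}
	events <- Event{"node-1", "die"}
	close(events)
	fmt.Println(monitor(events)["node-1"]) // 1
}
```

In the real system the channel would be fed by a long-lived stream from each daemon's events API, with a goroutine per engine.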
⚖️Trade-offs already made
- Single-manager architecture (primary/replica pattern with optional HA via external state store)
  - Why: Simplicity and consistency; avoids split-brain in multi-manager scenarios for a first-generation orchestrator
  - Consequence: Manager is a single point of failure; requires external backup or manual recovery; scales to hundreds of engines but not thousands
- API request distribution via sorter and engine selection at request time
  - Why: Stateless, simple proxying logic that works with any Docker API client without custom bindings
  - Consequence: No query optimization or caching across requests; per-request overhead for multi-engine aggregation (e.g., listing all containers)
- No built-in service mesh or overlay networking (relies on host network or external overlay)
  - Why: Keeps the swarm manager lightweight; delegates networking to Docker's native driver ecosystem
  - Consequence: Cross-host container communication requires additional setup (overlay networks, external routing); no automatic service discovery DNS
- Affinity and constraint system based on labels and inspect metadata
  - Why: Uses Docker's native constructs (container labels, engine labels) for portability and simplicity
  - Consequence: Limited expressiveness compared to Kubernetes CRDs; constraints are static per-request, not dynamic policies
🚫Non-goals (don't propose these)
- Persistent cluster state management (state is ephemeral; relies on external etcd or Consul for HA)
- Multi-tenancy or namespace isolation (no RBAC, quota, or security policies built-in)
- Real-time metrics and monitoring dashboard (events are forwarded, not aggregated)
- Rolling updates or blue-green deployments (no service abstraction or update orchestration)
- Automatic load balancing or service discovery (no built-in DNS or load balancer integration)
- Windows container orchestration (primarily Linux-focused, though some Windows API handlers exist)
🪤Traps & gotchas
- No discovery backend included: you must supply Consul, Zookeeper, or etcd; the code only integrates with them (see cli/join.go).
- State is ephemeral: no persistent storage layer; cluster state lives in memory and is lost on restart unless recovered from the discovery backend.
- Old Docker API: assumes Docker API versions 1.18–1.24; incompatible with the modern Docker Engine API.
- No graceful shutdown logic visible: killing the manager may leave orphaned container state.
- TLS/auth: early versions had minimal security; certificate and auth flags exist but may not be production-hardened.
🏗️Architecture
💡Concepts to learn
- API Proxy Pattern — Swarm Classic's core mechanism: the manager intercepts Docker API calls and routes them to appropriate engines, making a cluster appear as a single daemon
- Leader Election / Primary-Replica Replication — Used in api/primary.go and api/replica.go to ensure cluster consistency; one manager is primary (accepts writes), others are replicas (read-only or standby)
- Event-Driven State Synchronization — cluster/event_monitor.go streams Docker daemon events; the cluster reacts to container state changes in real-time rather than polling
- Bin Packing / First-Fit Scheduler — engine_sorter.go implements container placement strategies (CPU/memory constraints, affinity rules) to pack containers efficiently across engines
- Service Discovery via Consul/Zookeeper — Swarm relies on external discovery backends (Consul, Zk) to register engines, elect managers, and store cluster metadata; critical for cluster bootstrap
- HTTP Multiplexing / Request Routing — api/handlers.go and api/server.go route incoming Docker API requests to the correct engine based on container affinity or load-balancing rules
- Filtering & Constraints (Placement) — cluster/engine_sorter.go evaluates engines against labels, constraints, and filters to enforce placement policies (e.g., 'run only on GPU nodes')
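A binpack-style placement strategy (the concept behind engine_sorter.go) can be sketched as sorting candidates by free capacity and taking the tightest fit. `EngineStat` and its fields are invented for the illustration, not the repo's real types:

```go
package main

import (
	"fmt"
	"sort"
)

// EngineStat is an invented summary of one engine's spare capacity.
type EngineStat struct {
	Name       string
	FreeMemory int64 // MiB
}

// binpack picks the engine with the least free memory that still fits
// the request — packing containers tightly onto few hosts, which is
// what a binpack strategy optimizes for.
func binpack(engines []EngineStat, needed int64) (string, bool) {
	// Sort ascending by free memory so the tightest fit comes first.
	sort.Slice(engines, func(i, j int) bool {
		return engines[i].FreeMemory < engines[j].FreeMemory
	})
	for _, e := range engines {
		if e.FreeMemory >= needed {
			return e.Name, true
		}
	}
	return "", false // no engine can host this container
}

func main() {
	engines := []EngineStat{
		{"node-1", 8192},
		{"node-2", 1024},
		{"node-3", 2048},
	}
	name, ok := binpack(engines, 1500)
	fmt.Println(name, ok) // node-3 true
}
```

A "spread" strategy would simply invert the sort, preferring the emptiest engine instead.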
🔗Related repos
- docker/swarmkit — The spiritual successor to Swarm Classic; built into Docker Engine 1.12+ as 'Swarm mode' with service-level orchestration and native support
- docker/docker — The Docker Engine itself; Swarm Classic acts as a proxy to docker/docker daemon API calls
- hashicorp/consul — Primary discovery and state backend used by Swarm Classic for cluster membership and leader election
- containerd/containerd — Modern container runtime that supersedes the direct Docker daemon approach; relevant for understanding why Swarm Classic's architecture became obsolete
- kubernetes/kubernetes — Alternative container orchestration system that emerged post-Swarm Classic; represents the industry direction for multi-host container management
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive integration tests for cluster/swarm/cluster.go
The cluster/swarm/cluster.go file lacks corresponding cluster_test.go coverage. Given this is the core clustering logic for Swarm Classic, comprehensive tests would validate multi-node scheduling, failover behavior, and container distribution across the pool. This is critical for an archived project to ensure stability and help new contributors understand the clustering behavior.
- [ ] Review cluster/swarm/cluster.go to identify untested functions (scheduling, node selection, event handling)
- [ ] Create comprehensive test cases in cluster/swarm/cluster_test.go covering: multi-node scenarios, container placement strategies, node failure scenarios
- [ ] Add benchmarks for the scheduling algorithm to establish performance baselines
- [ ] Run tests against the existing test suite to ensure no regressions
Add GitHub Actions workflow to replace Travis CI and validate Go 1.13+ compatibility
The repo uses .travis.yml for CI but is archived and may not be actively monitored. GitHub Actions would provide native CI/CD within the repository. Additionally, the .godir file suggests older Go tooling; validating against modern Go versions (1.13+) would help contributors understand compatibility and catch deprecation issues early.
- [ ] Create .github/workflows/test.yml with Go test matrix (versions 1.13, 1.17, 1.20+)
- [ ] Add linting job using golangci-lint to catch code quality issues
- [ ] Include cross-platform testing for linux and windows (api/server_windows.go suggests Windows support)
- [ ] Document Go version requirements in README.md
Refactor api/handlers.go into separate handler modules and add missing HTTP endpoint tests
api/handlers.go likely contains multiple endpoint handlers without dedicated unit tests (handlers_test.go exists but may have incomplete coverage). This core API file would benefit from splitting by concern (container handlers, node handlers, etc.) and comprehensive HTTP integration tests for each endpoint, especially given the api/mockclient and api/nopclient test utilities already exist.
- [ ] Analyze api/handlers.go to identify distinct handler groups (container, image, network, volume, swarm)
- [ ] Create separate files: api/handlers_container.go, api/handlers_swarm.go, etc.
- [ ] Write table-driven HTTP integration tests in api/handlers_test.go covering success/error paths, edge cases, and response formats
- [ ] Verify api/mockclient is used effectively in new tests; enhance if needed for better test isolation
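A table-driven HTTP test of the kind this checklist proposes might look like the sketch below. The handler and route are placeholders, not the real ones in api/handlers.go:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// pingHandler is a placeholder for a proxied Docker API endpoint.
func pingHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodGet {
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		return
	}
	w.WriteHeader(http.StatusOK)
	fmt.Fprint(w, "OK")
}

// runCase exercises a handler through httptest and returns the status
// code — the core move in a table-driven handler test.
func runCase(method, path string) int {
	req := httptest.NewRequest(method, path, nil)
	rec := httptest.NewRecorder()
	pingHandler(rec, req)
	return rec.Code
}

func main() {
	// Each row covers one success or error path for the endpoint.
	cases := []struct {
		method string
		want   int
	}{
		{"GET", http.StatusOK},
		{"POST", http.StatusMethodNotAllowed},
	}
	for _, c := range cases {
		got := runCase(c.method, "/_ping")
		fmt.Printf("%s /_ping -> %d (want %d)\n", c.method, got, c.want)
	}
}
```

In a real `_test.go` file, `runCase` would take `*testing.T` and the table would grow a row per edge case; `httptest.NewRequest`/`NewRecorder` keep the tests in-process with no network dependency.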
🌿Good first issues
- Add test coverage for api/sorter.go (missing sorter_test.go) — implement unit tests for the sorting/filtering pipeline used in engine selection
- Document the discovery backend integration in cli/join.go with concrete examples for Consul, Zookeeper, and etcd — currently there's minimal inline documentation
- Add missing error handling in api/handlers.go for malformed container requests — many handlers return generic errors instead of Docker-compatible error responses
⭐Top contributors
- @nishanttotla — 40 commits
- @thaJeztah — 15 commits
- @dperny — 10 commits
- @justincormack — 7 commits
- Dani Louca — 6 commits
📝Recent commits
- 3cc018d — Merge pull request #2986 from justincormack/archive (justincormack)
- 8653f6a — Archive this project (justincormack)
- 859ccbc — Merge pull request #2984 from justincormack/sa (justincormack)
- 60d21c4 — Fix docs license ref (justincormack)
- 40001e9 — Merge pull request #2983 from thaJeztah/restore_docs (justincormack)
- 24bfb2f — Update other markdown files to include "classic" (thaJeztah)
- f0030c9 — Restore titles for GitHub (thaJeztah)
- 9ea2224 — Remove docs-specific metadata (thaJeztah)
- c17c5c6 — Update docs with latest version from the docs repository (thaJeztah)
- 62df7e9 — Revert "Remove old documentation source, add README on migration (#2508)" (thaJeztah)
🔒Security observations
This is an archived, unmaintained Docker Swarm Classic project with multiple critical security concerns. The use of outdated Go (1.12.4), exposure of the API port without default authentication, and lack of active maintenance make this unsuitable for production use. The primary recommendation is immediate migration to actively maintained alternatives like Docker Swarm mode or Kubernetes. If any legacy systems still depend on this code, implement strict network isolation, enforce TLS/mTLS, and apply external security controls.
- High · Outdated Go Version in Dockerfile — Dockerfile (ARG GO_VERSION=1.12.4). The Dockerfile uses Go 1.12.4 (released in 2019), which is significantly outdated and no longer receives security updates. This version predates numerous critical security fixes in the Go runtime and standard library. Fix: update to a modern, actively maintained Go version (1.21 or later) and review the Go release notes for security patches applied since 1.12.4.
- High · Exposed API Port Without Authentication — Dockerfile (ENV SWARM_HOST :2375, EXPOSE 2375). The application exposes port 2375 without apparent authentication or TLS enforcement by default. This is a Docker API port that could allow unauthenticated remote code execution if exposed to untrusted networks. Fix: enforce TLS/mTLS by default, require client certificates, disable the unauthenticated TCP socket, and document secure deployment practices in the README.
- High · Archived Project with No Active Maintenance — README.md. The project is explicitly archived and no longer actively developed, so security vulnerabilities will not be patched — an inherent risk for any production deployment. Fix: migrate to actively maintained orchestration solutions such as Docker Swarm mode (built into Docker Engine) or Kubernetes. Do not use in production environments.
- Medium · Missing Security Headers and TLS Configuration Documentation — api/handlers.go, docs/configure-tls.md. The codebase includes an HTTP API server (api/handlers.go), but there is no evidence of security headers, CORS validation, or comprehensive TLS configuration guidance in the repository. Fix: add security headers (HSTS, X-Frame-Options, X-Content-Type-Options, CSP), document and enforce strict TLS configurations, and implement request validation and rate limiting.
- Medium · Potential Command Injection in Build Process — Dockerfile (git rev-parse --short HEAD). The Dockerfile uses git rev-parse in an ldflags command without input validation. While git is trusted, this pattern could be vulnerable if similar patterns exist elsewhere with untrusted input. Fix: use build arguments instead of inline shell commands where possible, and validate and sanitize all external inputs in build processes.
- Medium · Missing Vulnerability Scanning in CI/CD — .travis.yml. The .travis.yml file is present, but no automated security scanning, dependency vulnerability checks, or container image scanning is apparent in the file structure. Fix: integrate tools like Trivy, Snyk, or OWASP Dependency-Check into the CI/CD pipeline, and add container image scanning before publishing.
- Low · Overly Permissive Alpine Package Management — Dockerfile (RUN apk add/del commands). The Dockerfile uses 'apk del .build-deps' to clean up, but no explicit 'apk update' or security scan is performed before the build, and package versions are not pinned. Fix: run 'apk update' before installing packages, consider pinning package versions, and use minimal base images with regular vulnerability scanning.
- Low · Volume Permission Concerns — Dockerfile (VOLUME $HOME/.swarm). The Dockerfile defines VOLUME $HOME/.swarm without specifying ownership or permission constraints, which could lead to privilege escalation if not properly managed at runtime. Fix: document volume mount security practices, and specify explicit user/group ownership and restrictive file permissions in deployment guidelines.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.