containrrr/watchtower
A process for automating Docker container base image updates.
Healthy across all four use cases
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 5mo ago
- ✓ 20 active contributors
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ⚠ Slowing — last commit 5mo ago
- ⚠ Concentrated ownership — top contributor handles 59% of recent commits
- ⚠ No test directory detected
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/containrrr/watchtower)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/containrrr/watchtower on X, Slack, or LinkedIn.
Onboarding: containrrr/watchtower
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/containrrr/watchtower shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 5mo ago
- 20 active contributors
- Apache-2.0 licensed
- CI configured
- ⚠ Slowing — last commit 5mo ago
- ⚠ Concentrated ownership — top contributor handles 59% of recent commits
- ⚠ No test directory detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live containrrr/watchtower
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/containrrr/watchtower.
What it runs against: a local clone of containrrr/watchtower — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in containrrr/watchtower | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 173 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of containrrr/watchtower. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/containrrr/watchtower.git
#   cd watchtower
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of containrrr/watchtower and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "containrrr/watchtower(\.git)?\b" \
  && ok "origin remote is containrrr/watchtower" \
  || miss "origin remote is not containrrr/watchtower (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. Match the license title anywhere in
#    the file — the standard Apache-2.0 LICENSE text never begins a line with
#    the bare SPDX identifier.
(grep -qiE "Apache License|Apache-2\.0" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in main.go cmd/root.go internal/actions/check.go internal/actions/update.go pkg/container/client.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 173 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~143d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/containrrr/watchtower"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
Watchtower is a Go daemon that automates Docker container base image updates by monitoring registries, pulling new images when available, and gracefully restarting containers with their original deployment options. It solves the problem of keeping containerized applications up to date without manual intervention, functioning as a lightweight alternative to Kubernetes for homelab and development environments.

Layout: a monolithic Go application.
- cmd/root.go and cmd/notify-upgrade.go — CLI entry points using Cobra/Viper for configuration
- Core watchtower logic lives in the main package
- docs/ — MkDocs-style markdown documentation
- dockerfiles/ — multiple Dockerfile variants for self-contained distributions
- Tests use the Ginkgo BDD framework
👥Who it's for
Homelab enthusiasts, media center operators, and local development teams who run Docker containers and want automatic base image updates without the complexity of Kubernetes. Not recommended for commercial/production use per the README.
🌱Maturity & risk
The project is abandoned as of 2024 — the README explicitly states 'This project is no longer maintained', with discussion at https://github.com/containrrr/watchtower/discussions/2135. It reached v1.x maturity with CI/CD (GitHub Actions and CircleCI, plus CodeQL analysis), codecov integration, and strong test coverage (Ginkgo/Gomega), but is no longer actively developed.
Critical risk: project is unmaintained. Dependencies are pinned to Go 1.20 and Docker client v24.0.7 (from 2023), meaning dependency rot is accelerating. No recent commits visible, and security patches won't be applied. The codebase has 298KB of Go code across many Docker-touching subsystems (networking, credentials, lifecycle hooks) that require ongoing maintenance.
Active areas of work
Nothing — project is in maintenance-only mode with no active development. The GitHub Actions workflows (pull-request.yml, release.yml, codeql-analysis.yml) exist but no recent activity is visible. Dependencies are frozen and the team has moved on per the pinned discussion.
🚀Get running
```bash
git clone https://github.com/containrrr/watchtower.git
cd watchtower
go build -o watchtower .   # the main package lives at the repo root, not ./cmd

# Run locally (requires Docker socket access)
./watchtower --help
```
Or use the pre-built Docker image: docker run containrrr/watchtower
Daily commands:
```bash
go run . --help   # the entry point is the root package; cmd/root.go is not a main package

# Or after building:
./watchtower --interval 3600 --cleanup

# Docker:
docker run -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower
```
🗺️Map of the codebase
- main.go — Entry point that initializes the Watchtower application and coordinates the main execution flow.
- cmd/root.go — Defines the root Cobra command and all CLI flags; essential for understanding configuration options.
- internal/actions/check.go — Core logic for checking container images against registries to detect available updates.
- internal/actions/update.go — Implements the container update execution logic, pulling new images and recreating containers.
- pkg/container/client.go — Docker API client wrapper that abstracts all container and image operations from the rest of the codebase.
- pkg/api/api.go — HTTP API server for remote management and metrics exposure; critical for non-daemon deployments.
- internal/flags/flags.go — Flag and configuration parsing that connects CLI arguments to runtime behavior across all components.
🛠️How to make changes
Add a new CLI flag and configuration option
- Define the flag in cmd/root.go with a Cobra BindPFlag() call to connect the CLI argument to viper configuration (cmd/root.go)
- Add the corresponding struct field and validation logic in internal/flags/flags.go (internal/flags/flags.go)
- Reference the configuration value in the relevant action or component that needs it (internal/actions/check.go or internal/actions/update.go)
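A hedged sketch of the first and last steps, assuming the standard Cobra/Viper pattern the codebase already uses; the flag name example-timeout and the exampleTimeout() helper are invented for illustration, not names from the repo:

```go
// Sketch only: a new persistent flag bound to viper, in the shape of
// cmd/root.go. The flag name "example-timeout" is hypothetical.
package cmd

import (
	"time"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

var rootCmd = &cobra.Command{Use: "watchtower"}

func init() {
	// Step 1: define the flag on the root command.
	rootCmd.PersistentFlags().Duration("example-timeout", 10*time.Second,
		"hypothetical timeout flag, for illustration only")
	// Bind it so the value can also come from config files or env vars.
	_ = viper.BindPFlag("example-timeout", rootCmd.PersistentFlags().Lookup("example-timeout"))
}

// Step 3: elsewhere (e.g. an action), read the value back through viper.
func exampleTimeout() time.Duration {
	return viper.GetDuration("example-timeout")
}
```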
Add a new notification type or notification channel
- Use the existing shoutrrr integration in cmd/notify-upgrade.go as a reference for notification dispatch (cmd/notify-upgrade.go)
- Add CLI flags for the new notification type in cmd/root.go, following the pattern of the existing notification options (cmd/root.go)
- Integrate notification sending into internal/actions/update.go after a successful container update (internal/actions/update.go)
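shoutrrr's top-level API is small; here is a self-contained sketch of a single notification send. The service URL is a placeholder, and Watchtower's real dispatch runs through its notifier wiring rather than a bare Send call:

```go
// Minimal shoutrrr usage sketch. Each service URL scheme (slack://,
// discord://, smtp://, ...) selects the notification channel; the URL
// below is a placeholder, not a working credential.
package main

import (
	"log"

	"github.com/containrrr/shoutrrr"
)

func main() {
	url := "discord://token@channel" // placeholder
	if err := shoutrrr.Send(url, "watchtower: container updated"); err != nil {
		log.Fatalf("notification failed: %v", err)
	}
}
```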
Add a new HTTP API endpoint for the API server
- Create a new handler file in pkg/api/ (e.g., pkg/api/status/status.go) following the structure of pkg/api/update/update.go (pkg/api/update/update.go)
- Register the route in the HTTP server in pkg/api/api.go, where routes are currently defined (pkg/api/api.go)
- Add corresponding Prometheus metrics in pkg/api/metrics/metrics.go if the endpoint should be tracked (pkg/api/metrics/metrics.go)
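A sketch of what the handler and registration steps might look like with a plain net/http mux; the names statusHandler, registerRoutes, and requireToken are invented here, so mirror the actual signatures in pkg/api/api.go rather than these:

```go
// Hypothetical endpoint sketch in the shape of pkg/api/update/update.go.
package api

import (
	"encoding/json"
	"net/http"
)

// statusHandler is an invented example endpoint.
func statusHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}

// registerRoutes shows the registration step; requireToken stands in for
// whatever auth middleware the existing endpoints use.
func registerRoutes(mux *http.ServeMux, requireToken func(http.HandlerFunc) http.HandlerFunc) {
	mux.HandleFunc("/v1/status", requireToken(statusHandler))
}
```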
Add support for a new container filtering criteria
- Add the filtering flag to cmd/root.go and validate it in internal/flags/flags.go (cmd/root.go)
- Implement the filtering logic in pkg/container/container.go as a method on the Container model (pkg/container/container.go)
- Apply the filter in internal/actions/check.go when selecting which containers to check for updates (internal/actions/check.go)
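An illustrative-only sketch of the filter shape described above; the Container struct is a stand-in for the real model in pkg/container/container.go, which wraps Docker API container info:

```go
// Stand-in types for illustration; not watchtower's real model.
package container

type Container struct {
	Labels map[string]string
}

// Enabled reports whether this container opted in to watchtower updates,
// using the label key documented for watchtower.
func (c Container) Enabled() bool {
	return c.Labels["com.centurylinklabs.watchtower.enable"] == "true"
}

// Filter keeps only containers for which keep returns true: the shape
// internal/actions/check.go applies when selecting containers to check.
func Filter(cs []Container, keep func(Container) bool) []Container {
	out := make([]Container, 0, len(cs))
	for _, c := range cs {
		if keep(c) {
			out = append(out, c)
		}
	}
	return out
}
```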
🔧Why these technologies
- Go 1.20 + Docker SDK — Lightweight, fast, and direct Docker API access for automation; single binary deployment model suits containerized monitoring
- Cobra CLI framework — Mature, feature-rich command framework for managing complex flag hierarchies and subcommands with strong viper integration
- Prometheus client_golang — Industry-standard metrics exposition format for integration with existing monitoring stacks (Grafana, Prometheus); a minimal sketch follows this list
- Cron (robfig/cron) — Declarative scheduling syntax allows users to define update frequency without custom time-based logic
- Shoutrrr — Unified notification abstraction supporting multiple channels (Slack, Discord, email, webhooks) from a single integration point
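The client_golang pattern referenced above, as a hedged, self-contained sketch; the metric name watchtower_updates_total is invented here, and the real definitions live in pkg/api/metrics/metrics.go:

```go
// Counter plus /metrics endpoint with prometheus/client_golang.
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var updatesTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "watchtower_updates_total", // invented for illustration
	Help: "Number of container updates performed.",
})

func main() {
	prometheus.MustRegister(updatesTotal)
	updatesTotal.Inc() // would be called after each successful update

	http.Handle("/metrics", promhttp.Handler())
	_ = http.ListenAndServe(":9090", nil)
}
```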
⚖️Trade-offs already made
- Monolithic daemon process rather than event-driven or sidecar pattern
  - Why: Simpler operational model: a single container with cron scheduling is more predictable and requires no external state
  - Consequence: Limited to pull-based update checks at fixed intervals; cannot react in real time to base image updates published by registries
- Sequential update execution rather than parallel container upgrades (a loop sketch follows this list)
  - Why: Avoids resource contention, simplifies rollback, and reduces the risk of cascading failures in dependent containers
  - Consequence: Longer total update duration when many containers require updates; slower recovery time for multi-container applications
- Label-based container filtering (com.centurylinklabs.watchtower.enable, etc.) rather than inclusion lists
  - Why: Opt-in safety model: only containers explicitly tagged participate in automation, reducing accidental updates
  - Consequence: Requires container authors to add labels; cannot update unmaintained containers without relabeling
- Stateless check/update logic with no persistent tracking of update history
  - Why: Simplifies deployment (no database dependency) and makes the tool idempotent across restarts
  - Consequence: Cannot replay failed updates or provide detailed update history; relies on external logging/metrics for audit trails
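The sequential-execution trade-off in miniature; Container and the update callback below are stand-ins, not watchtower's real types:

```go
// One container at a time: predictable load and simple failure handling,
// at the cost of total wall-clock time. Parallelizing with goroutines
// would be faster but reintroduces the contention risk noted above.
package sketch

import "fmt"

type Container struct{ Name string }

func updateAll(containers []Container, update func(Container) error) []error {
	var errs []error
	for _, c := range containers {
		if err := update(c); err != nil {
			// Record and continue; one failure should not block the rest.
			errs = append(errs, fmt.Errorf("%s: %w", c.Name, err))
		}
	}
	return errs
}
```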
🚫Non-goals (don't propose these)
- Real-time base image update notification (poll-based only, no registry webhooks)
- Orchestration of dependent container update order (users must manage dependencies via labels)
- Authentication for the HTTP API (must be run with firewall/reverse proxy protection)
- Persistent storage of update history or audit logs (output only via logs or external metrics)
- Rollback of failed updates (relies on image retention policies)
🪤Traps & gotchas
- Docker socket requirement: Watchtower MUST have access to /var/run/docker.sock (or the DOCKER_HOST env var); it won't work remotely without TCP socket setup.
- Label-based filtering: container updates are opt-in via the com.centurylinklabs.watchtower.enable=true label or the --run-once flag; the default behavior is confusing.
- Graceful shutdown timing: uses container stop signals (configurable; see docs/stop-signals.md); too-short timeouts cause ungraceful kills.
- Registry auth: private registries require Docker credentials in ~/.docker/config.json or the DOCKER_CONFIG env var; not documented as prominently as it should be.
- Abandoned status: no support, no security patches — using this in 2024+ is risky without forking.
🏗️Architecture
💡Concepts to learn
- Container Lifecycle Management — Watchtower must safely pause, replace, and restart containers; understanding stop signals, health checks, and graceful shutdown is core to avoiding data loss
- OCI Image Manifest & Registry APIs — Watchtower polls registries to detect new image versions; knowing how image digests, tags, and manifest schemas work prevents false positives
- Docker Credential Store & Auth — Private registry access requires understanding Docker's credential helpers and config.json format; this is a frequent source of 'permission denied' errors
- Cron Scheduling & Clock-Driven Daemons — Watchtower uses robfig/cron to schedule periodic checks; understanding cron expressions and daemon lifecycle prevents missed or double updates (a minimal sketch follows this list)
- Prometheus Metrics Instrumentation — Watchtower exposes Prometheus metrics for monitoring update success/failure; integration with Grafana dashboards (referenced in docs/) requires understanding metric types
- Docker Container Labels & Metadata — Watchtower uses labels (com.centurylinklabs.watchtower.enable) to control which containers update; understanding label-based filtering is essential for production deployments
- Process Signal Handling (SIGTERM, SIGKILL) — Watchtower must gracefully handle container stop signals and respect configured grace periods; mishandling causes data loss or hung updates
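The robfig/cron usage behind the scheduling concept above, assuming cron v3 and the @every descriptor; the job body is a placeholder for watchtower's periodic check:

```go
// Minimal clock-driven daemon loop with robfig/cron v3.
package main

import (
	"log"

	"github.com/robfig/cron/v3"
)

func main() {
	c := cron.New()
	_, err := c.AddFunc("@every 1h", func() {
		log.Println("checking registries for new images (placeholder)")
	})
	if err != nil {
		log.Fatal(err)
	}
	c.Start() // jobs run on schedule in background goroutines
	select {} // block forever, as a daemon would
}
```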
🔗Related repos
- moby/moby — The Docker daemon itself; Watchtower wraps the Docker Go SDK, which communicates with it
- containrrr/shoutrrr — Notification library that Watchtower uses for all backends (Slack, Discord, Telegram, etc.); a separate, maintained project
- distribution/distribution — Docker registry reference implementation; Watchtower polls OCI/Docker registries using the distribution APIs
- robfig/cron — Cron scheduler library Watchtower uses for periodic update checks; a lightweight alternative to system cron
- kubernetes/kubernetes — The recommended production alternative; the README explicitly advises using K8s or MicroK8s instead of Watchtower for serious workloads
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive integration tests for remote-hosts functionality
The docs/remote-hosts.md file indicates support for remote Docker hosts, but the internal/actions directory lacks dedicated test coverage for remote connection scenarios. This is critical for a tool that manages containers across multiple hosts, as connection failures could silently fail or cause data loss.
- [ ] Review internal/actions/ for existing remote host handling code
- [ ] Create internal/actions/remote_host_test.go with tests for SSH/TLS connection validation
- [ ] Add tests for credential validation and Docker socket path resolution across different host types
- [ ] Add tests for connection failure fallback behavior
- [ ] Update .github/workflows/pull-request.yml if needed to ensure integration tests run
Implement missing lifecycle hooks test suite
docs/lifecycle-hooks.md documents pre/post-update hooks, but there are no dedicated test files for this critical feature in internal/actions/. Hooks can modify container behavior during updates, so comprehensive testing is essential to prevent regressions. A Ginkgo skeleton is sketched after the checklist.
- [ ] Create internal/actions/lifecycle_hooks_test.go
- [ ] Add tests for hook execution order (pre-update, post-update)
- [ ] Add tests for hook failure scenarios and rollback behavior
- [ ] Add tests for hook timeout and cancellation handling
- [ ] Verify hooks are tested with the mocks already present in internal/actions/mocks/
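A starting-point skeleton for the suite above, assuming Ginkgo v2 with Gomega (check go.mod for the version the repo actually pins); the spec bodies are placeholders to be filled in against the real hook-execution code in internal/actions:

```go
// internal/actions/lifecycle_hooks_test.go, skeleton only.
package actions_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestLifecycleHooks(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Lifecycle Hooks Suite")
}

var _ = Describe("lifecycle hooks", func() {
	It("runs the pre-update hook before pulling the new image", func() {
		Expect(true).To(BeTrue()) // placeholder assertion
	})

	It("aborts the update when the pre-update hook fails", func() {
		Skip("to be implemented against internal/actions")
	})
})
```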
Add GitHub Actions workflow for testing against multiple Docker versions
The repo targets Docker 24.0.7 in go.mod, but watchtower needs to support multiple Docker API versions for backward compatibility. Currently, there's no CI workflow testing against different Docker/Moby versions, which could cause silent compatibility regressions.
- [ ] Create .github/workflows/docker-compatibility.yml
- [ ] Configure matrix testing for Docker 20.10 LTS, 23.x, and 24.x versions
- [ ] Use docker/setup-buildx-action and docker-in-docker for test environment
- [ ] Run existing test suite from .github/workflows/pull-request.yml against each Docker version
- [ ] Add badge to README.md showing compatibility matrix status
🌿Good first issues
- Add test coverage for cmd/notify-upgrade.go — the notify-upgrade subcommand exists but has no visible unit tests in the file list; write Ginkgo specs that mock the Shoutrrr integration.
- Document the .devbots/lock-issue.yml bot configuration and add examples to CONTRIBUTING.md — the GitHub Actions bot config exists but is unexplained; clarify how maintainers use it for triage.
- Create a docs/examples/ directory with docker-compose.yml files for common setups (with Portainer, with a private registry, with lifecycle hooks) — docs exist but there are no runnable examples.
⭐Top contributors
- @dependabot[bot] — 59 commits
- @piksel — 15 commits
- @simskij — 5 commits
- @allcontributors[bot] — 4 commits
- @jebabin — 2 commits
📝Recent commits
- ca0e86e — Add maintenance notice to README (simskij)
- 76f9cea — chore(deps): bump github.com/prometheus/client_golang from 1.17.0 to 1.18.0 (#1894) (dependabot[bot])
- 7ba3049 — chore(deps): bump github/codeql-action from 2 to 3 (#1886) (dependabot[bot])
- af3ad21 — chore(deps): bump github.com/spf13/viper from 1.18.1 to 1.18.2 (#1885) (dependabot[bot])
- 0a14f3a — chore(deps): bump github.com/spf13/viper from 1.17.0 to 1.18.1 (#1874) (dependabot[bot])
- 588ba43 — chore(deps): bump alpine from 3.18.5 to 3.19.0 in /dockerfiles (#1875) (dependabot[bot])
- 9411b37 — chore(deps): bump actions/setup-python from 4 to 5 (#1877) (dependabot[bot])
- 2d7735f — fix(ci): fix incorrect actions config (#1872) (piksel)
- 6b57003 — chore(deps): bump alpine from 3.18.4 to 3.18.5 in /dockerfiles (#1871) (dependabot[bot])
- 01fd38b — chore: fix json syntax error in .all-contributorsrc (#1867) (PeterDaveHello)
🔒Security observations
- High · Outdated Go Version — go.mod. The project targets Go 1.20, which is no longer supported; Go 1.20 reached end-of-life in August 2024. Using outdated Go versions means missing critical security patches and bug fixes that are only available in newer versions. Fix: update to the latest stable Go version (1.21+); change 'go 1.20' to 'go 1.23' or the latest available version and rebuild all dependencies.
- High · Hardcoded API Token in Docker Compose — docker-compose.yml (watchtower command). The docker-compose.yml file contains a hardcoded HTTP API token 'demotoken' in the watchtower service command. This exposes credentials in version control and configuration files, making them accessible to anyone with repository access. Fix: remove hardcoded tokens from docker-compose.yml; use environment variables or Docker secrets instead, passing tokens via environment files or a secrets management system at runtime.
- High · Project No Longer Maintained — README.md. The README explicitly states 'This project is no longer maintained', with a reference to a discussion about discontinuation. Unmaintained software receives no security patches, vulnerability fixes, or updates. Fix: consider forking the project for continued maintenance, or migrate to an actively maintained alternative for container image update automation.
- High · Potentially Vulnerable Docker and CLI Versions — go.mod (docker and docker/cli dependencies). The project uses docker v24.0.7 and docker/cli v24.0.7 (from July 2023); these are significantly outdated versions that may contain known vulnerabilities, and Docker has released multiple security updates since. Fix: update the Docker and Docker CLI dependencies to the latest available versions; run 'go get -u github.com/docker/docker@latest' and 'go get -u github.com/docker/cli@latest' and test thoroughly.
- Medium · HTTP API Exposed Without TLS in Development Setup — docker-compose.yml (ports section). The docker-compose.yml exposes port 8080 (HTTP API) without requiring TLS/HTTPS. The API accepts a token but transmits it over unencrypted HTTP, making it vulnerable to man-in-the-middle attacks in development, and potentially in production if misconfigured similarly. Fix: enable TLS for all API endpoints, use HTTPS instead of HTTP, implement certificate-based authentication, and document secure production deployment patterns.
- Medium · Docker Socket Mounted with Read-Only Access — docker-compose.yml (watchtower volumes). While the docker-compose.yml mounts the Docker socket as read-only (':ro'), mounting the socket at all grants significant privileges: even read-only access to docker.sock allows inspection of all containers and images, which can be a privacy concern. Fix: document the security implications; consider using the Docker API with restricted authentication instead of socket access, and implement least-privilege access controls and audit logging.
- Medium · Missing Security Headers in HTTP API — pkg/api (HTTP API implementation). No evidence of security headers (HSTS, Content-Security-Policy, X-Frame-Options, etc.) being set in the HTTP API implementation, based on the file structure review. Fix: implement standard security headers in all HTTP responses, using middleware to add headers like X-Content-Type-Options, X-Frame-Options, Strict-Transport-Security, and Content-Security-Policy (a middleware sketch follows this list).
- Medium · No SBOM or Dependency Verification — goreleaser.yml and build configuration. The project lacks Software Bill of Materials (SBOM) generation and shows no evidence of dependency verification mechanisms or supply chain security measures. Fix: implement SBOM generation in the release pipeline, use go.sum verification, consider adding SLSA provenance, and document dependency audit procedures.
- Low · Debug Mode in Docker Compose Example — docker-compose.yml (watchtower command). The docker-compose.yml includes the '--debug' flag in the watchtower command. While useful for development, the example should not encourage debug mode in production deployments. Fix: drop the '--debug' flag from the example, or comment that it is for local debugging only.
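The header middleware suggested in the security-headers finding, as a plain net/http sketch with no watchtower-specific APIs assumed:

```go
// Wraps any handler and stamps standard security headers on every response.
package api

import "net/http"

func securityHeaders(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		h := w.Header()
		h.Set("X-Content-Type-Options", "nosniff")
		h.Set("X-Frame-Options", "DENY")
		h.Set("Strict-Transport-Security", "max-age=63072000; includeSubDomains")
		h.Set("Content-Security-Policy", "default-src 'none'")
		next.ServeHTTP(w, r)
	})
}
```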
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.