
dropwizard/metrics

:chart_with_upwards_trend: Capturing JVM- and application-level metrics. So you know what's going on.

Healthy

Healthy across all four use cases

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 4d ago
  • 5 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • Single-maintainer risk — top contributor 91% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Markdown variant:
[![RepoPilot: Healthy](https://repopilot.app/api/badge/dropwizard/metrics)](https://repopilot.app/r/dropwizard/metrics)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/dropwizard/metrics on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: dropwizard/metrics

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/dropwizard/metrics shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 4d ago
  • 5 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • ⚠ Single-maintainer risk — top contributor 91% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live dropwizard/metrics repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/dropwizard/metrics.

What it runs against: a local clone of dropwizard/metrics — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in dropwizard/metrics | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches a relicense before you depend on it |
| 3 | Default branch release/4.2.x exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 34 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>dropwizard/metrics</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of dropwizard/metrics. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/dropwizard/metrics.git
#   cd metrics
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of dropwizard/metrics and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "dropwizard/metrics(\.git)?$" \
  && ok "origin remote is dropwizard/metrics" \
  || miss "origin remote is not dropwizard/metrics (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. The Apache-2.0 text opens with
# "Apache License" / "Version 2.0", so check for both strings.
grep -qi "Apache License" LICENSE 2>/dev/null \
  && grep -qF "Version 2.0" LICENSE 2>/dev/null \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify release/4.2.x >/dev/null 2>&1 \
  && ok "default branch release/4.2.x exists" \
  || miss "default branch release/4.2.x no longer exists"

# 4. Critical files exist
for f in \
  "metrics-annotation/src/main/java/com/codahale/metrics/annotation/Counted.java" \
  "metrics-annotation/src/main/java/com/codahale/metrics/annotation/Gauge.java" \
  "metrics-annotation/src/main/java/com/codahale/metrics/annotation/ExceptionMetered.java" \
  "pom.xml" \
  ".github/workflows/maven.yml"; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 34 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~4d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/dropwizard/metrics"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Dropwizard Metrics is a Java library for capturing JVM- and application-level metrics with minimal overhead. It provides pluggable reporters (JSON, CSV, Graphite) that export gauges, counters, histograms, meters, and timers, enabling production monitoring and performance analysis without intrusive instrumentation. The repository is a multi-module Maven monorepo: the core engine lives in metrics-core, with separate modules for reporters and integrations (metrics-graphite, metrics-jvm, etc., inferred from the typical module layout), extensive documentation in docs/ (Sphinx-based, with a Python requirements.txt), and CI workflows in .github/workflows/. Language mix: 1.76M Java lines (core), with minimal Shell/Python for build scripts and docs generation.

👥Who it's for

Java backend engineers and DevOps teams building production systems who need lightweight, low-latency metrics collection without the overhead of heavier APM solutions. Specifically users of Dropwizard framework (which bundles it) and standalone applications requiring comprehensive JVM observability.

🌱Maturity & risk

Highly mature and production-ready: actively maintained on the 4.2.x branch (current stable, 🟢), with 1.76M lines of Java code, comprehensive CI/CD via GitHub Actions (see .github/workflows/maven.yml, release.yml), and a clear versioning strategy. Development of 5.0.x is paused; it targets backwards-incompatible features such as tag support. This is a widely used reference implementation in the Java metrics ecosystem.

Low risk for 4.2.x (actively maintained), but multi-branch structure means version fragmentation—4.0.x, 4.1.x are unmaintained. 5.0.x pause status suggests architectural rethinking in progress. No visible dependency bloat or security issues in the readme, but contributors should monitor SECURITY.md and dependabot.yaml for vulnerability updates. Single repo for multiple major versions could introduce confusion.

Active areas of work

Active maintenance on release/4.2.x branch with snapshot version 4.2.39-SNAPSHOT. GitHub Actions workflows (assign-milestone.yml, close_stale.yml, maven.yml, trigger-release.yml) show ongoing CI/CD automation and release management. Dependabot enabled for dependency updates. Version 5.0.x development paused, focusing on backwards-incompatible features like tag support.

🚀Get running

git clone https://github.com/dropwizard/metrics.git && cd metrics && ./mvnw clean install (the Maven wrapper in .mvn/wrapper pins the Maven 3.x version, so no system-wide Maven is required). For docs: cd docs && pip install -r requirements.txt && make html to build the documentation locally.

Daily commands: This is a library, not a runnable app—build with: mvn clean package. For docs: mvn -pl docs clean site generates HTML. CI runs mvn clean verify (see .github/workflows/maven.yml) with full test suite. No server to start; consume as Maven dependency: <dependency><groupId>io.dropwizard.metrics</groupId><artifactId>metrics-core</artifactId><version>4.2.39</version></dependency>
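With metrics-core on the classpath (see the dependency snippet above), basic instrumentation looks like the sketch below. The metric names ("app.requests", "app.jobs.completed") are illustrative; MetricRegistry, Timer, and ConsoleReporter are the library's core types.

```java
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import java.util.concurrent.TimeUnit;

public class MetricsQuickstart {
    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();

        // Time a unit of work; Timer.Context is Closeable, so try-with-resources works.
        Timer requests = registry.timer(MetricRegistry.name("app", "requests"));
        try (Timer.Context ignored = requests.time()) {
            Thread.sleep(50); // simulated work
        }
        registry.counter("app.jobs.completed").inc();

        // One-shot dump to stdout; use reporter.start(10, TimeUnit.SECONDS) for periodic output.
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.report();
    }
}
```

ConsoleReporter is fine for smoke-testing; production setups swap in a reporter from the sibling modules (Graphite, JMX, SLF4J).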

🗺️Map of the codebase

  • metrics-annotation/src/main/java/com/codahale/metrics/annotation/Counted.java — Core annotation interface for counting metric events; foundational for metrics-driven instrumentation across the codebase
  • metrics-annotation/src/main/java/com/codahale/metrics/annotation/Gauge.java — Gauge annotation for tracking instantaneous values; essential for understanding how metrics capture application state
  • metrics-annotation/src/main/java/com/codahale/metrics/annotation/ExceptionMetered.java — Exception metering annotation; critical for observability patterns around error tracking in the framework
  • pom.xml — Parent Maven configuration defining dependency versions, build plugins, and release processes for all modules
  • .github/workflows/maven.yml — CI/CD pipeline definition for running tests and validating builds; essential for understanding contribution requirements
  • checkstyle.xml — Code style enforcement rules; every contributor must follow these standards
  • docs/source/manual/core.rst — Core metrics documentation explaining counters, gauges, histograms, and timers—the primary abstractions

🛠️How to make changes

Add a New Metric Annotation

  1. Create a new annotation interface in metrics-annotation module following @Counted/@Gauge pattern (metrics-annotation/src/main/java/com/codahale/metrics/annotation/YourMetric.java)
  2. Define retention, target, and documentation matching existing annotation conventions (metrics-annotation/pom.xml)
  3. Add unit tests validating annotation detection and metric recording (metrics-annotation/src/test/java/com/codahale/metrics/annotation/YourMetricTest.java)
  4. Document usage in the manual section (docs/source/manual/core.rst)

Add a New Reporter (e.g., for a backend like CloudWatch)

  1. Create metrics-cloudwatch module with pom.xml extending metrics-parent (metrics-cloudwatch/pom.xml)
  2. Implement Reporter interface iterating over registry metrics (metrics-cloudwatch/src/main/java/com/codahale/metrics/cloudwatch/CloudWatchReporter.java)
  3. Implement unit and integration tests (metrics-cloudwatch/src/test/java/com/codahale/metrics/cloudwatch/CloudWatchReporterTest.java)
  4. Create documentation file describing configuration and example usage (docs/source/manual/cloudwatch.rst)

Add JVM Metric Collection for a New Runtime Component

  1. Add gauge registration in metrics-jvm module for the new component (e.g., NativeMemory) (metrics-jvm/src/main/java/com/codahale/metrics/jvm/JvmAttributeGaugeSet.java)
  2. Implement measurement logic via ManagementFactory or OperatingSystemMXBean (metrics-jvm/src/main/java/com/codahale/metrics/jvm/YourComponentGaugeSet.java)
  3. Add unit tests validating metric retrieval and correctness (metrics-jvm/src/test/java/com/codahale/metrics/jvm/YourComponentGaugeSetTest.java)
  4. Document new gauges in JVM metrics guide (docs/source/manual/jvm.rst)
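Steps 1–2 usually amount to implementing MetricSet over the java.lang.management MXBeans. A hedged sketch; YourComponentGaugeSet and the gauge names are illustrative:

```java
import com.codahale.metrics.Gauge;
import com.codahale.metrics.Metric;
import com.codahale.metrics.MetricSet;
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical gauge set following the metrics-jvm MetricSet pattern. */
public class YourComponentGaugeSet implements MetricSet {
    private final OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();

    @Override
    public Map<String, Metric> getMetrics() {
        Map<String, Metric> gauges = new HashMap<>();
        // Gauge<T> has a single method (getValue), so method references work.
        gauges.put("load-average", (Gauge<Double>) os::getSystemLoadAverage);
        gauges.put("thread-count", (Gauge<Integer>) threads::getThreadCount);
        return gauges;
    }
}
```

Register the set on a registry with registerAll(...) so each gauge is named under a common prefix.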

Integrate Metrics with a New Servlet Framework

  1. Create metrics-servlet-xyz module with servlet/filter dependencies (metrics-servlet-xyz/pom.xml)
  2. Implement Filter/Listener to intercept requests and record timing/status metrics (metrics-servlet-xyz/src/main/java/com/codahale/metrics/servlet/XyzMetricsFilter.java)
  3. Add configuration examples and registration patterns (metrics-servlet-xyz/src/main/java/com/codahale/metrics/servlet/XyzMetricsListener.java)
  4. Create documentation with integration examples (docs/source/manual/servlet-xyz.rst)
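Step 2 might look like the sketch below. XyzMetricsFilter is hypothetical, the javax.servlet API is an assumption (newer containers need the jakarta.servlet equivalents), and a real module would obtain the registry from the ServletContext rather than creating one:

```java
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import javax.servlet.*;
import java.io.IOException;

/** Hypothetical request-timing filter sketch. */
public class XyzMetricsFilter implements Filter {
    private Timer requests;

    @Override
    public void init(FilterConfig config) {
        // Illustrative only: real integrations share one registry app-wide.
        MetricRegistry registry = new MetricRegistry();
        requests = registry.timer("xyz.requests");
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Times the whole downstream chain, including the servlet itself.
        try (Timer.Context ignored = requests.time()) {
            chain.doFilter(req, res);
        }
    }

    @Override
    public void destroy() {}
}
```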

🔧Why these technologies

  • Java annotations (@Counted, @Gauge, @Timed, etc.) — Enable declarative, non-invasive metric instrumentation without boilerplate; leverages standard Java introspection at compile/runtime
  • ManagementFactory / MXBeans (GarbageCollectorMXBean, MemoryMXBean, ThreadMXBean) — Standard JVM APIs for accessing runtime metrics without external dependencies; available in any JDK
  • Maven multi-module build with BOM (Bill of Materials) — Decouples core metrics engine from reporters (Graphite, Prometheus, etc.); allows selective dependency on needed integrations
  • Sphinx documentation with reStructuredText — Industry-standard for Java/Python projects; integrates with ReadTheDocs for automated publishing; version-specific branches (4.2.x, 5.0.x)
  • GitHub Actions CI/CD with Maven Wrapper — Standardizes builds across environments; automated releases to Maven Central; reproducible dependency resolution

⚖️Trade-offs already made

  • In-memory metric storage with optional periodic reporters vs. streaming/push to external system

    • Why: Minimizes overhead in critical paths; metrics are polled on-demand rather than always-on pub/sub
    • Consequence: Reporter processes must pull snapshot of entire registry; loss of metrics if JVM crashes before reporter flushes
  • Annotation-based instrumentation via proxy/reflection vs. bytecode weaving (AspectJ)

    • Why: Simpler dependency model; no compile-time weaving; works with Spring/CDI frameworks
    • Consequence: Requires explicit method decoration; cannot retroactively instrument third-party libraries without source changes
  • Separate metrics-annotation module from metrics-core

    • Why: Allows lightweight annotation-only imports for APIs that only expose metrics contracts without runtime overhead
    • Consequence: Dual classpath management; annotation processor must be paired with actual metrics collection
  • Per-metric JDK concurrency primitives (e.g., LongAdder inside Counter) vs. coarse locking or external aggregation

    • Why: Standard, widely understood JDK primitives; LongAdder shards its state across cells, keeping write contention low for most workloads
    • Consequence: Reads are slightly costlier and only weakly consistent under concurrent writes; extremely hot metrics (>100k/s on a single name) may still warrant application-level sharding
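The contention trade-off can be seen with plain JDK types: LongAdder (which the library's Counter builds on) spreads increments across internal cells, while a single AtomicLong serializes every CAS. A minimal JDK-only demo; both end at the same total:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class CounterContentionDemo {
    public static void main(String[] args) {
        AtomicLong atomic = new AtomicLong();
        LongAdder adder = new LongAdder();

        // 8 tasks, 100k increments each. Totals are identical; under real
        // contention the LongAdder's sharded cells absorb the CAS traffic.
        IntStream.range(0, 8).parallel().forEach(t -> {
            for (int i = 0; i < 100_000; i++) {
                atomic.incrementAndGet();
                adder.increment();
            }
        });

        System.out.println(atomic.get()); // 800000
        System.out.println(adder.sum());  // 800000
    }
}
```

The price of LongAdder is on the read side: sum() walks all cells and is not a linearizable snapshot, which is acceptable for monitoring counters.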

🚫Non-goals (don't propose these)

  • Not a metrics storage backend—delegates to Graphite, Prometheus, CloudWatch, etc. via reporters
  • Not a log aggregation framework—complements (not replaces) application logging
  • Not real-time alerting engine—only collects and exports data; alerting logic lives in backend systems
  • Not distributed tracing—designed for local JVM metrics; use Jaeger/Zipkin for request tracing
  • Does not provide authentication/TLS for reporters—assumes network is secured by ops layer

🪤Traps & gotchas

Multi-branch versioning: release/4.2.x is current stable, but release/5.0.x exists (paused) and older branches (3.x, 4.0.x, 4.1.x) are unmaintained—easy to accidentally PR to wrong branch. Maven wrapper (.mvn/) handles JDK/Maven setup, so don't assume system Maven. Docs build requires Python 3 + Sphinx (docs/requirements.txt), separate from Java build. Version 5.0.x requires new Maven coordinates and package names (breaking change)—don't backport 5.x features to 4.2.x.

🏗️Architecture

💡Concepts to learn

  • Metric Registry — Central registry pattern (MetricRegistry, typically one instance shared app-wide, e.g. via SharedMetricRegistries) is how all metrics are collected, named, and accessed—the core architectural pattern you'll interact with constantly.
  • Reporter Pattern — Pluggable reporters (Graphite, JSON, CSV, SLF4J) decouple metric collection from export; understanding this pattern is key to extending the library.
  • Histogram (Reservoir Sampling) — Histograms track the distribution of values (e.g., latencies) using sampling reservoirs (exponentially decaying by default; sliding-window and HdrHistogram-backed variants exist) rather than storing every sample; critical for percentile monitoring in production without memory overhead.
  • Meter (Rate Limiting / Exponential Moving Average) — Meters track event rates (requests/sec) with one/five/fifteen-minute moving averages; essential for understanding throughput trends.
  • JVM Instrumentation — Metrics captures GC stats, thread pools, heap usage via java.lang.management APIs; understanding JVM internals helps interpret metric output.
  • Low-Latency Collection — Metrics prioritizes near-zero overhead for application code (non-blocking, lock-free updates); critical design constraint affecting internal implementation choices.
  • Graphite/Carbon Protocol — Metrics can export plaintext metrics to Graphite; understanding the plaintext protocol (metric_name value timestamp) is required for debugging integration issues.

🔗Related projects

  • micrometer-metrics/micrometer — Modern alternative metrics facade supporting JVM, Micrometer Tracing, and multiple backends (Prometheus, Graphite, CloudWatch); gradually replacing Dropwizard Metrics in newer projects.
  • prometheus/client_java — Official Prometheus Java client; used by teams migrating from Dropwizard Metrics to Prometheus-native instrumentation.
  • dropwizard/dropwizard — Parent framework that bundles Dropwizard Metrics as its primary instrumentation library; see metrics integration in dropwizard-core and dropwizard-servlets modules.
  • Netflix/spectator — Netflix's in-house metrics library with similar goals (low-overhead JVM metrics) but different API design and Netflix-specific reporters (Atlas, CloudWatch).
  • metrics-clojure/metrics-clojure — Clojure wrapper around Dropwizard Metrics; shows idiomatic non-Java integration patterns for JVM languages.
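The Graphite plaintext protocol mentioned under the concepts above is simple enough to show directly. A JDK-only sketch of the line format each reporter flush emits (the metric path and value are illustrative):

```java
import java.time.Instant;
import java.util.Locale;

public class GraphitePlaintext {
    /** Formats one Graphite/Carbon plaintext line: "<path> <value> <epoch-seconds>\n". */
    static String line(String path, double value, long epochSeconds) {
        return String.format(Locale.US, "%s %s %d%n", path, value, epochSeconds);
    }

    public static void main(String[] args) {
        // A reporter writes one such line per metric over a TCP socket
        // (Carbon's plaintext listener defaults to port 2003).
        System.out.print(line("app.requests.p99", 12.5, Instant.now().getEpochSecond()));
    }
}
```

Knowing this format makes integration debugging easy: netcat the Carbon port and eyeball the lines a reporter sends.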

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add Maven Central Badge and Version Matrix to README.md

The README.md is truncated and missing the complete 'Versions' table that should list different version branches (e.g., release/4.2.x, release/4.1.x, etc.) with their documentation links and support status. This is critical for users to understand which version to use and where to find docs for older versions. The file structure shows multiple release branches exist but the README doesn't document them clearly.

  • [ ] Complete the 'Versions' section in README.md with a table showing: Version | Source Branch | Documentation | Status (Active/LTS/EOL)
  • [ ] Add links to documentation for each major version branch
  • [ ] Reference docs/source content and .readthedocs.yaml configuration to ensure consistency
  • [ ] Test that all links in the version matrix are valid and point to correct ReadTheDocs versions

Create GitHub Actions workflow for automated documentation deployment

The repo has .readthedocs.yaml config and docs/source structure with Sphinx setup (Makefile, requirements.txt, theme), but there's no explicit GitHub Actions workflow for building and validating documentation on PRs. Currently only maven.yml, release.yml, trigger-release.yml, and close_stale.yml exist in .github/workflows/. Adding a docs validation workflow would catch broken links and build errors before merge.

  • [ ] Create .github/workflows/docs.yml that triggers on PRs modifying docs/source/**
  • [ ] Add build step using 'sphinx-build' (documented in docs/Makefile) to validate RST syntax
  • [ ] Include link checker (e.g., sphinx-linkcheck) to catch broken documentation references
  • [ ] Configure workflow to comment on PRs with build status or attach artifact showing built HTML
  • [ ] Reference docs/requirements.txt for pip dependencies needed in CI

Add integration tests for metrics-core with Java module system (JPMS)

The pom.xml shows explicit javaModuleName properties defined (e.g., 'com.codahale.metrics.docs'), indicating the project supports Java 9+ module system. However, based on the file structure provided, there's no evidence of dedicated test modules or CI workflows validating module-info.java compatibility. A dedicated JPMS integration test would ensure the metrics library works correctly when used as a named module.

  • [ ] Create a new test module under metrics-core (e.g., metrics-core-jpms-tests/) with proper module-info.java
  • [ ] Write integration tests that verify: metrics-core can be required in module-info.java, proper exports are visible, no split package issues exist
  • [ ] Update .github/workflows/maven.yml to run tests with --add-modules and --limit-modules flags to validate module graph
  • [ ] Document module usage in docs/source/ (add new .rst file for JPMS guide if not present)
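A sketch of the module descriptor such a test module might carry. The module name com.codahale.metrics is assumed to be the Automatic-Module-Name that metrics-core declares via its javaModuleName property; confirm it against the jar manifest (e.g., jar --describe-module --file=<metrics-core jar>) before relying on it.

```java
// metrics-core-jpms-tests/src/main/java/module-info.java (hypothetical test module)
module com.example.metrics.jpms.test {
    // Assumed automatic module name for metrics-core; verify against the jar manifest.
    requires com.codahale.metrics;
}
```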

🌿Good first issues

  • Add missing integration test for metrics-graphite reporter with Graphite mock server (no test server visible in file list for graphite module).
  • Improve docs/source/ coverage for JVM metrics sub-module (GC metrics, memory pool tracking)—add concrete examples showing how to instrument thread pools.
  • Create migration guide docs/source/ showing 4.2.x → 5.0.x breaking changes (package name, Maven coordinates, tag support) once 5.0.x exits pause.

Top contributors


📝Recent commits

  • 53740cd — fix(deps): update dependency com.github.ben-manes.caffeine:caffeine to v3.2.4 (#5212) (renovate[bot])
  • cc6f5bb — chore(deps): update dependency maven to v3.9.15 (#5209) (renovate[bot])
  • c5a1b3e — chore(deps): update actions/cache action to v5.0.5 (#5208) (renovate[bot])
  • b1110ec — fix(deps): update jetty monorepo to v12.1.8 (#5203) (renovate[bot])
  • a2ba00f — fix(deps): update dependency net.bytebuddy:byte-buddy to v1.18.8-jdk5 (#5202) (renovate[bot])
  • 727f0e6 — fix(deps): update log4j2 monorepo to v2.25.4 (#5200) (renovate[bot])
  • db35bcb — chore(deps): update actions/cache action to v5.0.4 (#5193) (renovate[bot])
  • e5fc801 — chore(deps): update dependency maven to v3.9.14 (#5189) (renovate[bot])
  • 9d4e99c — chore(deps): update dependency maven to v3.9.13 (#5181) (renovate[bot])
  • 450d872 — fix(deps): update jetty monorepo to v12.1.7 (#5184) (renovate[bot])

🔒Security observations

The Dropwizard Metrics project demonstrates reasonable security practices with a defined security policy and responsible disclosure process. However, there are areas for improvement including formalizing the vulnerability disclosure program, expanding version support windows, and ensuring documentation dependencies are regularly audited. The codebase analysis is limited by the provided file structure; a full source code review would be needed to assess injection risks, credential handling, and other code-level vulnerabilities. No obvious hardcoded secrets, exposed ports, or critical misconfigurations were identified in the provided materials. The narrow support window (only 4.2.x) and incomplete POM configuration represent the main concerns.

  • Medium · Incomplete POM Configuration in docs/pom.xml — docs/pom.xml. The maven-resources-plugin configuration appears to be truncated in the provided POM file. The execution goal 'res' is incomplete, which could indicate a build configuration issue or potential for misconfiguration. Fix: Complete the maven-resources-plugin configuration by ensuring the goal is fully specified (e.g., 'resources' instead of 'res') and verify all plugin configurations are properly closed.
  • Low · Development Version in Snapshot — docs/pom.xml (parent version reference), SECURITY.md. The project is currently on version 4.2.39-SNAPSHOT, indicating active development. Version 5.0.x is marked as 'in development' in SECURITY.md with no support. Development versions may contain unpatched vulnerabilities. Fix: Ensure development versions are not used in production environments. Clearly communicate the status of unreleased versions to users and maintain regular security updates for supported versions.
  • Low · Limited Security Vulnerability Reporting Channels — SECURITY.md. The SECURITY.md policy directs vulnerabilities to a general mailing list and Twitter DMs. While adequate, there is no formal security.txt file or bug bounty program visible, and the communication channels could be more formalized. Fix: Consider implementing a formal security vulnerability disclosure program, adding a .well-known/security.txt file, and using GitHub Security Advisory features for coordinated vulnerability disclosure.
  • Low · Outdated Documentation Build Dependencies — docs/requirements.txt. The documentation build configuration references Sphinx and other tools via requirements.txt, which may contain outdated or vulnerable dependencies that could be exploited during the documentation build process. Fix: Regularly audit and update documentation build dependencies. Pin specific versions and perform security scans on documentation build toolchain dependencies using tools like pip-audit.
  • Low · Limited Version Support Window — SECURITY.md. Only the current stable version (4.2.x) is supported. This is a narrow support window that could leave users vulnerable if they cannot immediately upgrade. Fix: Consider extending security support to at least the previous major version (N-1 support model) to provide users with a reasonable migration period for critical security patches.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
