RepoPilot

nathanmarz/storm

Distributed and fault-tolerant realtime computation: stream processing, continuous computation, distributed RPC, and more

Overall: Mixed

Stale — last commit 9y ago

Use as dependency: Mixed (weakest axis)

Last commit was 9y ago; no CI workflows detected.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Mixed

Last commit was 9y ago; no CI workflows detected.

  • 14 active contributors
  • Distributed ownership (top contributor 39% of recent commits)
  • Apache-2.0 licensed
  • Tests present
  • Stale — last commit 9y ago
  • No CI workflows detected
What would change the summary?
  • Use as dependency: Mixed → Healthy if ≥1 commit in the last 365 days
  • Deploy as-is: Mixed → Healthy if ≥1 commit in the last 180 days

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Forkable" badge

Paste into your README — live-updates from the latest cached analysis.

Variant: "RepoPilot: Forkable"
[![RepoPilot: Forkable](https://repopilot.app/api/badge/nathanmarz/storm?axis=fork)](https://repopilot.app/r/nathanmarz/storm)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/nathanmarz/storm on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: nathanmarz/storm

Generated by RepoPilot · 2026-05-09

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/nathanmarz/storm shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Stale — last commit 9y ago

  • 14 active contributors
  • Distributed ownership (top contributor 39% of recent commits)
  • Apache-2.0 licensed
  • Tests present
  • ⚠ Stale — last commit 9y ago
  • ⚠ No CI workflows detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live nathanmarz/storm repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/nathanmarz/storm.

What it runs against: a local clone of nathanmarz/storm — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in nathanmarz/storm | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch moved-to-apache exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 3217 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>nathanmarz/storm</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of nathanmarz/storm. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/nathanmarz/storm.git
#   cd storm
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of nathanmarz/storm and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "nathanmarz/storm(\.git)?\b" \
  && ok "origin remote is nathanmarz/storm" \
  || miss "origin remote is not nathanmarz/storm (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "Apache License|Apache-2\.0" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify moved-to-apache >/dev/null 2>&1 \
  && ok "default branch moved-to-apache exists" \
  || miss "default branch moved-to-apache no longer exists"

# 4. Critical files exist
test -f "storm-core/src/clj/backtype/storm/daemon/nimbus.clj" \
  && ok "storm-core/src/clj/backtype/storm/daemon/nimbus.clj" \
  || miss "missing critical file: storm-core/src/clj/backtype/storm/daemon/nimbus.clj"
test -f "storm-core/src/clj/backtype/storm/daemon/supervisor.clj" \
  && ok "storm-core/src/clj/backtype/storm/daemon/supervisor.clj" \
  || miss "missing critical file: storm-core/src/clj/backtype/storm/daemon/supervisor.clj"
test -f "storm-core/src/clj/backtype/storm/daemon/worker.clj" \
  && ok "storm-core/src/clj/backtype/storm/daemon/worker.clj" \
  || miss "missing critical file: storm-core/src/clj/backtype/storm/daemon/worker.clj"
test -f "storm-core/src/jvm/backtype/storm/Config.java" \
  && ok "storm-core/src/jvm/backtype/storm/Config.java" \
  || miss "missing critical file: storm-core/src/jvm/backtype/storm/Config.java"
test -f "storm-core/src/clj/backtype/storm/daemon/executor.clj" \
  && ok "storm-core/src/clj/backtype/storm/daemon/executor.clj" \
  || miss "missing critical file: storm-core/src/clj/backtype/storm/daemon/executor.clj"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 3217 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~3187d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/nathanmarz/storm"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Apache Storm is a distributed stream processing engine for real-time computation over unbounded data streams. It provides fault-tolerant, scalable processing via topologies (DAGs of spouts and bolts), built-in distributed RPC, and reliability tracking through its acker daemon, which guarantees at-least-once tuple processing semantics. Monorepo structure: storm-core/ contains the main engine, with src/clj/backtype/storm/daemon/ holding the critical daemons (nimbus.clj for the cluster manager, supervisor.clj for the node manager, executor.clj for task execution, acker.clj for reliability). Commands live in src/clj/backtype/storm/command/. Configuration defaults are in conf/defaults.yaml; logging goes via logback/cluster.xml and storm-console-logging/. Build via project.clj (Leiningen).

👥Who it's for

Data engineers and architects building real-time analytics pipelines, fraud detection systems, and event processing applications who need horizontal scalability and fault tolerance without writing custom distributed systems code.

🌱Maturity & risk

The Storm engine itself is production-proven, but this repository (nathanmarz/storm) is historical—the official project moved to Apache in 2013. This repo contains 2.3MB of Java, extensive Clojure daemons (nimbus, supervisor, executor, acker), and a full CLI toolchain. Current active development happens at apache/incubator-storm; contributions should go there.

This specific fork is no longer maintained—all active development moved to Apache. Building against this codebase means missing security patches, bug fixes, and new features. The architecture depends on ZooKeeper for coordination and ZMQ for messaging (see bin/install_zmq.sh), introducing external service dependencies. Code is 10+ years old; newer distributed stream platforms (Kafka Streams, Flink) have displaced Storm in many use cases.

Active areas of work

This nathanmarz/storm fork is archived history. Active development occurs in apache/incubator-storm. The README explicitly directs contributors to migrate to Apache: submit PRs to apache/incubator-storm/pulls or patches to Apache JIRA (issues.apache.org/jira/browse/STORM). No new work should target this repository.

🚀Get running

Clone the Apache repository (not this one): git clone https://github.com/apache/incubator-storm.git && cd incubator-storm. For this historical repo: git clone https://github.com/nathanmarz/storm.git && cd storm && lein install (requires Leiningen). Then ./bin/storm dev-zookeeper to start embedded ZooKeeper, and ./bin/storm nimbus to start the master node in another terminal.

Daily commands: Start cluster: ./bin/storm dev-zookeeper (embedded ZK in dev), then ./bin/storm nimbus, ./bin/storm supervisor, ./bin/storm ui (web dashboard on localhost:8080). Submit topology: ./bin/storm jar topology.jar com.example.TopologyClass. Monitor via UI or ./bin/storm list.
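For a quick smoke test without any daemons, topologies can also run fully in-process. A minimal sketch, assuming the pre-Apache backtype.storm package layout this repo uses; TestWordSpout ships in storm-core's testing utilities, and the commented-out bolt wiring refers to the hypothetical bolt sketched under "How to make changes" below:

```java
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.testing.TestWordSpout;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.utils.Utils;

public class DevTopology {
    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        // TestWordSpout ships with storm-core's testing utilities.
        builder.setSpout("words", new TestWordSpout(), 2);
        // Wire bolts here, e.g.:
        // builder.setBolt("split", new SplitSentenceBolt(), 4).shuffleGrouping("words");

        Config conf = new Config();
        conf.setDebug(true);

        // LocalCluster runs Nimbus, a Supervisor, and ZooKeeper in-process,
        // so none of the ./bin/storm daemons are needed for a smoke test.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("dev-topology", conf, builder.createTopology());
        Utils.sleep(10000); // let the topology run for ten seconds
        cluster.killTopology("dev-topology");
        cluster.shutdown();
    }
}
```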

🗺️Map of the codebase

  • storm-core/src/clj/backtype/storm/daemon/nimbus.clj — Nimbus daemon—the master scheduler and coordinator for all Storm topologies; understanding job submission, scheduling, and fault tolerance starts here
  • storm-core/src/clj/backtype/storm/daemon/supervisor.clj — Supervisor daemon that manages worker processes on individual nodes; critical for understanding task assignment and heartbeat/failure detection
  • storm-core/src/clj/backtype/storm/daemon/worker.clj — Worker process implementation handling tuple routing, messaging, and task lifecycle; the runtime engine for executing bolts and spouts
  • storm-core/src/jvm/backtype/storm/Config.java — Central configuration constants and validation; all topology config keys and cluster settings are defined here
  • storm-core/src/clj/backtype/storm/daemon/executor.clj — Executor abstraction for running bolt/spout task instances; handles parallelism, threading, and tuple emission/acknowledgment
  • storm-core/src/clj/backtype/storm/tuple.clj — Core tuple data structure and routing logic; defines how data flows between spouts, bolts, and the acking mechanism
  • storm-core/src/clj/backtype/storm/zookeeper.clj — ZooKeeper client abstractions for distributed state management, worker registration, and cluster coordination

🛠️How to make changes

Create a Custom Bolt

  1. Implement the backtype.storm.topology.IRichBolt interface: prepare(), execute(), declareOutputFields(), and cleanup() (storm-core/src/jvm/backtype/storm/topology/IRichBolt.java (not in file list—reference the Java topology API))
  2. In execute(), call collector.emit(streamId, tuple) to send output tuples, anchored to input for acking (storm-core/src/clj/backtype/storm/daemon/task.clj (line ~output collector binding))
  3. Register the bolt in your topology using TopologyBuilder.setBolt(componentId, bolt, parallelism) (storm-core/src/clj/backtype/storm/testing.clj (see complete-topology examples))
  4. Test locally using LocalCluster and testing/with-simulated-time-local-cluster (storm-core/src/clj/backtype/storm/LocalCluster.clj). A minimal bolt sketch follows this list.
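A minimal sketch of those steps, assuming the backtype.storm API in this repo. BaseRichBolt is the convenience base class over IRichBolt (it stubs cleanup()); SplitSentenceBolt itself is a hypothetical example, not code from this repo:

```java
import java.util.Map;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

// Splits a sentence tuple into word tuples, anchoring each emit to the
// input so the acker can track the full tuple tree.
public class SplitSentenceBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector; // keep the collector for use in execute()
    }

    @Override
    public void execute(Tuple input) {
        for (String word : input.getString(0).split("\\s+")) {
            collector.emit(input, new Values(word)); // anchored emit
        }
        collector.ack(input); // with IRichBolt, acking is the bolt's responsibility
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
```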

Create a Custom Spout

  1. Implement backtype.storm.topology.IRichSpout: open(), nextTuple(), ack(), fail(), and declareOutputFields() (storm-core/src/jvm/backtype/storm/topology/IRichSpout.java (not in file list—reference the Java topology API))
  2. In nextTuple(), call collector.emit(values, msgId) to emit tuples with unique message IDs for tracking (storm-core/src/clj/backtype/storm/daemon/executor.clj (spout-execute-fn))
  3. Implement ack() for successful tuple processing and fail() to replay on failure (storm-core/src/clj/backtype/storm/daemon/common.clj (acker integration))
  4. Register in topology and test with LocalCluster; verify acking behavior with complete-topology (storm-core/src/clj/backtype/storm/testing.clj). A spout sketch follows this list.
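A matching sketch, again against the backtype.storm API; SentenceSpout and its canned sentences are hypothetical. The msgId passed to emit() is what ties later ack()/fail() callbacks back to a tuple:

```java
import java.util.Map;
import java.util.UUID;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

// Emits sentences with a unique msgId so the acker tracks each tuple tree.
public class SentenceSpout extends BaseRichSpout {
    private static final String[] SENTENCES = {
        "the cow jumped over the moon",
        "an apple a day keeps the doctor away"
    };
    private SpoutOutputCollector collector;
    private int index = 0;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        String sentence = SENTENCES[index++ % SENTENCES.length];
        String msgId = UUID.randomUUID().toString();
        collector.emit(new Values(sentence), msgId); // msgId enables ack/fail callbacks
    }

    @Override
    public void ack(Object msgId) { /* tuple tree fully processed; safe to forget */ }

    @Override
    public void fail(Object msgId) { /* a real source would re-queue msgId for replay */ }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("sentence"));
    }
}
```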

Implement a Custom Scheduler

  1. Implement backtype.storm.scheduler.IScheduler: prepare(Map conf) and schedule(Topologies topologies, Cluster cluster) (storm-core/src/jvm/backtype/storm/scheduler/IScheduler.java (not in file list))
  2. Use cluster.getAvailableSlots() and cluster.getUsedSlots() to query current slot allocation (storm-core/src/clj/backtype/storm/daemon/nimbus.clj (scheduler invocation context))
  3. Call cluster.assign(topology-id, new-assignment) to place executor tasks on nodes and ports (storm-core/src/clj/backtype/storm/scheduler/DefaultScheduler.clj (example: round-robin assignment))
  4. Set storm.scheduler in storm.yaml and restart Nimbus to activate (storm-core/src/jvm/backtype/storm/Config.java (STORM_SCHEDULER key)). A scheduler sketch follows this list.
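A sketch of the shape such a scheduler takes, assuming the 0.8/0.9-era pluggable scheduler API; FirstFitScheduler is hypothetical, and the Cluster method names should be verified against storm-core/src/jvm/backtype/storm/scheduler/ in your checkout:

```java
import java.util.List;
import java.util.Map;
import backtype.storm.scheduler.Cluster;
import backtype.storm.scheduler.ExecutorDetails;
import backtype.storm.scheduler.IScheduler;
import backtype.storm.scheduler.Topologies;
import backtype.storm.scheduler.TopologyDetails;
import backtype.storm.scheduler.WorkerSlot;

// Naive scheduler: pack every unassigned executor group onto the first free
// slot. Real schedulers spread executors across nodes for fault tolerance.
public class FirstFitScheduler implements IScheduler {
    @Override
    public void prepare(Map conf) { /* read custom storm.yaml keys here */ }

    @Override
    public void schedule(Topologies topologies, Cluster cluster) {
        for (TopologyDetails topology : topologies.getTopologies()) {
            if (!cluster.needsScheduling(topology)) continue;
            List<WorkerSlot> slots = cluster.getAvailableSlots();
            Map<String, List<ExecutorDetails>> pending =
                cluster.getNeedsSchedulingComponentToExecutors(topology);
            int i = 0;
            for (List<ExecutorDetails> executors : pending.values()) {
                if (i >= slots.size()) break; // out of capacity; wait for next round
                cluster.assign(slots.get(i++), topology.getId(), executors);
            }
        }
    }
}
```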

Add a New Cluster Command

  1. Create a new file in storm-core/src/clj/backtype/storm/command/ following naming convention (e.g., my-command.clj) (storm-core/src/clj/backtype/storm/command/activate.clj (example: activate topology))
  2. Implement a -main function that parses args and calls thrift-client-call to invoke Nimbus RPC (storm-core/src/clj/backtype/storm/command/kill_topology.clj (example with error handling))
  3. Register the command in bin/storm shell script with entry for your command name (bin/storm (shell script dispatch))
  4. Test by running 'bin/storm my-command' after build; verify RPC method exists in Nimbus service definition (storm-core/src/clj/backtype/storm/thrift.clj (thrift interface))

🪤Traps & gotchas

  • ZooKeeper must be running before Nimbus/Supervisors start (a dev-zookeeper helper is provided, but production needs an external ZK ensemble).
  • The ZMQ library requires a C/C++ build toolchain (see bin/install_zmq.sh—fails silently on some systems).
  • Topology code must be serializable (Kryo by default) and idempotent for at-least-once guarantees (a registration sketch follows this list).
  • Worker-to-worker tuple delivery is not guaranteed in-order within a stream (only within a task).
  • Config changes require topology redeployment; hot-reload is not supported.
  • Clojure/Java version mismatches break daemon startup.
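On the serialization trap specifically: registering payload classes up front and disabling the Java-serialization fallback turns a silent production slowdown into a fast failure at submit time. A sketch assuming the Config helper methods in storm-core (verify against Config.java); MyEvent is a hypothetical payload class:

```java
import backtype.storm.Config;

public class SerializationConfig {
    // Hypothetical payload class carried inside tuples.
    public static class MyEvent {
        public long timestamp;
        public String payload;
    }

    public static Config build() {
        Config conf = new Config();
        // Fail fast at submit time instead of silently falling back to
        // slow java.io.Serializable when a class is not Kryo-registered.
        conf.setFallBackOnJavaSerialization(false);
        // Register payload classes so Kryo uses its efficient FieldSerializer.
        conf.registerSerialization(MyEvent.class);
        return conf;
    }
}
```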

🏗️Architecture

💡Concepts to learn

  • Acker-based tuple tracking — Storm's at-least-once guarantee relies on the Acker daemon XORing tuple IDs to detect dropped or failed tuples; understanding this is essential for building reliable topologies without data loss (a toy XOR demo follows this list)
  • Topology DAG (directed acyclic graph) — Storm processes streams as topologies—spouts emit tuples, bolts transform them, and edges define routing; DAG structure determines parallelism and fault boundaries
  • Tuple anchoring and stream grouping — Reliability depends on anchoring emitted tuples to input tuples (tracking lineage), and grouping strategies (shuffle, fields, all) determine how tuples are routed to downstream tasks
  • ZooKeeper coordination protocol — Nimbus and Supervisors use ZooKeeper for distributed state (task assignments, topology metadata, heartbeats); reading the /storm/ znode tree is critical for debugging cluster issues
  • Backpressure and flow control — Storm queues tuples in memory between tasks; if a bolt is slow, upstream spouts must throttle input to avoid memory overflow—this is enforced via transfer queue metrics
  • Kryo serialization framework — Storm serializes tuples and topology objects via Kryo by default; custom serializers are required for non-Serializable objects, and serialization overhead directly impacts throughput
  • Transactional topologies (Trident) — Higher-level abstraction over core Storm offering exactly-once semantics via micro-batches; critical for stateful operations like aggregations and joins without deduplication logic
  • apache/incubator-storm — Official active repository—this nathanmarz/storm fork is deprecated; all ongoing development and production releases happen here
  • apache/kafka — Complementary stream platform often paired with Storm for durability and replay; Kafka Spout is a common data source in Storm topologies
  • apache/flink — Modern successor to Storm with better state management, event-time semantics, and lower latency; many Storm users have migrated to Flink
  • nathanmarz/cascalog — Data processing DSL by Storm's creator that compiled to Storm topologies; demonstrates higher-level abstraction over raw bolts/spouts
  • spotify/luigi — Task dependency framework commonly used alongside Storm for orchestrating batch + stream hybrid pipelines
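The acker's XOR trick from the first bullet is small enough to demo in isolation. The toy below is illustrative only, not Storm's implementation (that lives in storm-core/src/clj/backtype/storm/daemon/acker.clj): every tuple in a tree gets a random 64-bit ID, and the acker XORs each ID once on emit and once on ack. XOR is self-inverse, so the running value returns to 0 exactly when every emitted tuple has been acked, regardless of order.

```java
import java.util.Random;

public class AckerXorDemo {
    public static void main(String[] args) {
        Random rng = new Random();
        long ackVal = 0L;
        long[] tupleIds = { rng.nextLong(), rng.nextLong(), rng.nextLong() };

        for (long id : tupleIds) ackVal ^= id; // emits: ids enter the tuple tree
        for (long id : tupleIds) ackVal ^= id; // acks: ids leave the tuple tree

        // ackVal == 0 means the whole tree was processed and the spout's ack()
        // fires; a timeout with ackVal != 0 triggers the spout's fail() for replay.
        System.out.println(ackVal == 0 ? "tree complete" : "tree pending");
    }
}
```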

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for storm-core/src/clj/backtype/storm/scheduler/

The scheduler directory contains three critical scheduler implementations (DefaultScheduler, EvenScheduler, IsolationScheduler) that determine how tasks are distributed across supervisor nodes. These are core to Storm's reliability but there are no visible test files in the provided structure for these schedulers. Adding unit tests would catch regressions in task placement logic and ensure fairness/isolation guarantees.

  • [ ] Create storm-core/test/clj/backtype/storm/scheduler/test_default_scheduler.clj with tests for basic task assignment
  • [ ] Create storm-core/test/clj/backtype/storm/scheduler/test_even_scheduler.clj verifying even distribution across nodes
  • [ ] Create storm-core/test/clj/backtype/storm/scheduler/test_isolation_scheduler.clj testing topology isolation constraints
  • [ ] Add integration tests verifying schedulers respect supervisor capacity and node constraints
  • [ ] Add tests verifying scheduler behavior under node failures and rebalancing scenarios

Add integration tests for messaging layer (ZMQ and local implementations)

The storm-core/src/clj/backtype/storm/messaging/ directory contains critical zmq.clj and local.clj implementations for inter-worker communication. Messaging failures directly impact topology reliability. The testing.clj file exists but lacks specific messaging transport tests. Add comprehensive tests for message delivery guarantees, ordering, and failover scenarios.

  • [ ] Create storm-core/test/clj/backtype/storm/messaging/test_zmq_messaging.clj testing ZMQ transport reliability
  • [ ] Add tests for message ordering guarantees across topology
  • [ ] Add failure scenario tests (network partition, dropped messages, slow consumers)
  • [ ] Create storm-core/test/clj/backtype/storm/messaging/test_local_messaging.clj for in-process messaging
  • [ ] Add performance benchmarks for both messaging implementations to catch regressions

Add missing documentation and examples for DRPC (Distributed RPC) configuration

The codebase contains storm-core/src/clj/backtype/storm/daemon/drpc.clj but conf/storm.yaml.example lacks DRPC-specific configuration examples. DRPC is a complex feature that requires proper setup. Adding concrete configuration examples and troubleshooting guide would reduce user confusion and support burden.

  • [ ] Expand conf/storm.yaml.example with DRPC configuration section including drpc.port, drpc.servers, drpc.worker.threads, and drpc.queue.size with comments explaining each setting
  • [ ] Create a new DRPC_SETUP.md guide documenting DRPC architecture, configuration steps, and common pitfalls
  • [ ] Add code examples showing how to invoke DRPC functions from clients with proper timeout/error handling (a client sketch follows this list)
  • [ ] Document DRPC metrics and monitoring best practices in DRPC_SETUP.md
  • [ ] Add troubleshooting section covering common DRPC issues (timeout, connection refused, slow responses)
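For the client-example checklist item, the invocation being documented looks roughly like the sketch below, assuming storm-core's DRPCClient and the conventional drpc.port default of 3772 (confirm in conf/defaults.yaml); the host, function name, and argument are placeholders:

```java
import backtype.storm.utils.DRPCClient;

public class DrpcExample {
    public static void main(String[] args) throws Exception {
        // Connects to a running DRPC server; "reach" must be served by a
        // deployed DRPC topology for this call to return.
        DRPCClient client = new DRPCClient("drpc-host.example.com", 3772);
        try {
            // Blocks until the DRPC topology returns a result
            // (or the server-side request timeout fires).
            String result = client.execute("reach", "http://example.com/page1");
            System.out.println("reach = " + result);
        } finally {
            client.close(); // release the underlying Thrift transport
        }
    }
}
```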

🌿Good first issues

  • Add integration tests for the DRPC (Distributed RPC) daemon in storm-core/src/clj/backtype/storm/daemon/drpc.clj—currently only command-level tests exist; test async RPC request/response with tuple ordering guarantees
  • Write a Python example topology demonstrating the Python API (storm-core/multilang) for word count over Kafka—Storm has Java/Clojure examples but minimal Python docs despite the multilang support in bin/storm
  • Document the tuple anchoring and ack() semantics in a code example under storm-core/src/clj/backtype/storm/daemon/task.clj—the acker protocol is critical but poorly explained in inline comments


📝Recent commits

  • cdb116e — Add move to Apache notice. (ptgoetz)
  • 7e40f9f — Merge pull request #767 from ptgoetz/apache-headers (ptgoetz)
  • 51f8a54 — Merge pull request #768 from ptgoetz/dependency-versions (ptgoetz)
  • 639201c — update README to reflect ASF license. (ptgoetz)
  • 080cec2 — update LICENSE/NOTICE to assume source-only distribution (ptgoetz)
  • 70d4323 — Merge branch 'master' into dependency-versions (ptgoetz)
  • 1babce4 — bump version for move to Apache incubator (ptgoetz)
  • f2d955b — update carbonite, tools.cli, and jgrapht dependencies to released versions (ptgoetz)
  • 009f924 — enforce UTF-8 encoding for python and ruby sources (ptgoetz)
  • a0987a5 — first pass at Apache LICENSE and NOTICE files (ptgoetz)

🔒Security observations

  • High · Potential Unsafe Deserialization in Distributed System — storm-core/src/clj/backtype/storm/messaging/. Storm is a distributed computing framework that processes data across multiple nodes. The presence of messaging components (zmq.clj, local.clj) and daemon processes suggests serialization/deserialization of untrusted data. Without explicit input validation shown in the file structure, there's risk of unsafe deserialization attacks when processing topology configurations or inter-node messages. Fix: Implement strict input validation for all deserialized objects. Use allowlists for acceptable classes. Consider using safer serialization formats (e.g., Protocol Buffers) instead of Java serialization where possible.
  • High · Missing Authentication in Distributed RPC (DRPC) — storm-core/src/clj/backtype/storm/daemon/drpc.clj. The codebase includes a DRPC (Distributed RPC) daemon (drpc.clj) which listens for remote procedure calls. The file structure does not reveal explicit authentication mechanisms for DRPC endpoint access, potentially allowing unauthorized users to submit arbitrary computations to the cluster. Fix: Implement strong authentication for DRPC endpoints. Require API keys, mutual TLS, or other credential-based authentication. Validate and authorize all incoming RPC requests before execution.
  • High · Plaintext Communication Over ZMQ — storm-core/src/clj/backtype/storm/messaging/zmq.clj. ZMQ messaging is used for inter-node communication (zmq.clj). Without explicit evidence of TLS/encryption configuration in the file structure, there is risk that sensitive data (topology definitions, tuples, intermediate results) could be intercepted in transit between Storm nodes. Fix: Enable ZMQ encryption using CurveZMQ or similar mechanisms. Enforce TLS/SSL for all inter-node communication. Verify that configuration defaults require encrypted channels.
  • High · Potential Command Injection in Shell Commands — storm-core/src/clj/backtype/storm/command/shell_submission.clj. The presence of shell_submission.clj and external command execution (bin/storm script) without visible sanitization suggests potential command injection vulnerabilities when processing user-supplied topology names, parameters, or configuration values. Fix: Use parameterized/escaped command execution. Avoid shell=True in subprocess calls. Validate and sanitize all user inputs before passing to shell commands. Use allowlists for acceptable characters in topology names and parameters.
  • Medium · Exposed JAAS Configuration File — conf/jaas_digest.conf. A JAAS configuration file exists (conf/jaas_digest.conf) which typically contains security realm definitions. If this file is not properly protected or contains insecure digest algorithms, authentication could be compromised. Fix: Use strong digest algorithms (SHA-256 or better) in JAAS configuration. Ensure this file is not world-readable. Consider using Kerberos or LDAP instead of digest-based authentication for production environments.
  • Medium · Default Configuration File in Repository — conf/storm.yaml.example, conf/defaults.yaml. storm.yaml.example and defaults.yaml are committed to the repository. These example configurations may contain insecure default values (e.g., default ports, disabled security features) that developers might copy directly to production. Fix: Review all default configuration values for security implications. Document security-critical settings explicitly. Provide secure-by-default configuration templates separate from examples.
  • Medium · No Visible Input Validation in UI Components — storm-core/src/clj/backtype/storm/ui/. UI components exist (storm/ui/core.clj, storm/ui/helpers.clj) for web-based cluster management. Without explicit validation visible in file names, there's potential for XSS or CSRF attacks when rendering user-supplied data in the web interface. Fix: Implement comprehensive input validation and output encoding in all UI components. Use templating engines with auto-escaping. Implement CSRF tokens for state-changing operations. Use Content Security Policy headers.
  • Medium · Unencrypted ZooKeeper Communication — storm-core/src/clj/backtype/storm/zookeeper.clj. Storm uses ZooKeeper (zookeeper.clj) for distributed coordination and state management. No TLS configuration for the ZooKeeper connection is visible in the file structure, so task assignments, heartbeats, and topology metadata may cross the network unencrypted. Fix: Enable TLS for ZooKeeper client connections where the deployed ZooKeeper version supports it, or restrict ZooKeeper traffic to a trusted network segment.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
