RepoPilot

splitrb/split

:chart_with_upwards_trend: The Rack Based A/B testing framework

Healthy

Healthy across all four use cases

Use as dependency — Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify — Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 3mo ago
  • 12 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • Single-maintainer risk — top contributor 81% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste it into your README — the badge updates live from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/splitrb/split)](https://repopilot.app/r/splitrb/split)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/splitrb/split on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: splitrb/split

Generated by RepoPilot · 2026-05-10 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/splitrb/split shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 3mo ago
  • 12 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Single-maintainer risk — top contributor 81% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live splitrb/split repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/splitrb/split.

What it runs against: a local clone of splitrb/split — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in splitrb/split | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 113 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>splitrb/split</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of splitrb/split. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/splitrb/split.git
#   cd split
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of splitrb/split and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "splitrb/split(\.git)?\b" \
  && ok "origin remote is splitrb/split" \
  || miss "origin remote is not splitrb/split (artifact may be from a fork)"

# 2. License matches what RepoPilot saw (LICENSE file or the gemspec —
#    this is a Ruby gem, so the license is declared there, not package.json)
(grep -qiE "^(The )?MIT" LICENSE 2>/dev/null \
   || grep -qi "MIT" ./*.gemspec 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in lib/split.rb lib/split/experiment.rb lib/split/helper.rb \
         lib/split/trial.rb lib/split/redis_interface.rb; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 113 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~83d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/splitrb/split"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Split is a Rack-based A/B testing framework that integrates with Rails, Sinatra, and other Ruby web applications to run split tests (experiments) backed by Redis. It provides the ab_test helper to randomly assign users to experiment variants and ab_finished to track conversion, plus a built-in web dashboard (in lib/split/dashboard/) for monitoring live experiment metrics. The architecture is modular Rack middleware: core logic in lib/split/experiment.rb and lib/split/alternative.rb handles experiment state and variant assignment; lib/split/algorithms/ (block_randomization.rb, weighted_sample.rb, whiplash.rb) provides pluggable randomization strategies; lib/split/helper.rb provides Rails view/controller integration; lib/split/dashboard/ is a self-contained Sinatra app for metrics visualization; and Redis persistence goes through lib/split/experiment_storage.rb.

👥Who it's for

Ruby/Rails developers building web applications who need to run A/B tests on features, UI elements, or user flows without external SaaS dependencies. Product teams and engineers managing experiments on Rack-based stacks (Rails, Sinatra, Hanami).

🌱Maturity & risk

Production-ready and actively maintained. The gem is v4.0+ with CI/CD via GitHub Actions (.github/workflows/ci.yml), comprehensive test coverage tracked by CodeClimate, and clear Ruby version support (2.5+, Rails 5.2+). The v4.0 release line indicates a stable, long-lived project used in production systems.

Low risk for production use. The project has clear governance (CODE_OF_CONDUCT.md, CONTRIBUTING.md), up-to-date dependency management (Dependabot integration), and a focused Redis-only dependency model. Single main risk: Redis is a hard requirement with strict version constraints (≥4.0), so infrastructure outages directly impact experiment assignment.

Active areas of work

No specific recent changes are visible from the file list alone, but the presence of .github/dependabot.yml and active CI workflows points to ongoing dependency maintenance. The codebase appears stable, with no indication of major breaking changes in development.

🚀Get running

```bash
git clone https://github.com/splitrb/split.git
cd split
bundle install
redis-server &   # Redis must be running
bundle exec rspec
```

Daily commands: there is no standalone server — Split is a library mixed into a host Rails/Sinatra app. To test locally, bundle exec rspec runs the test suite (requires Redis). The dashboard mounts at a configurable route in the host app.
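As an illustrative host-app sketch of the public API named above — the controller, action, and experiment names here are hypothetical; ab_test and ab_finished are the helpers this artifact describes, but verify the exact signatures against Split's README before copying:

```ruby
# app/controllers/checkout_controller.rb (hypothetical host app)
class CheckoutController < ApplicationController
  def show
    # Assigns this user to one variant on first call, then keeps
    # returning the same variant on subsequent requests.
    @button_color = ab_test(:checkout_button, "red", "green")
  end

  def complete
    # Records a conversion for whichever variant this user saw.
    ab_finished(:checkout_button)
  end
end
```

The dashboard itself is mounted separately as a Rack app at a route of your choosing.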

🗺️Map of the codebase

  • lib/split.rb — Main entry point that initializes the Split framework and loads all core modules; every contributor must understand the initialization flow here.
  • lib/split/experiment.rb — Core Experiment class representing A/B test logic, alternatives, and persistence; foundational to understanding how experiments work.
  • lib/split/helper.rb — Primary helper module that provides the ab_test method used in controller/view code; essential for understanding the public API.
  • lib/split/trial.rb — Trial class that manages user participation in a single experiment, including alternative assignment and goal tracking.
  • lib/split/redis_interface.rb — Abstraction layer for all Redis operations; critical for understanding data persistence and concurrent access patterns.
  • lib/split/persistence — Persistence layer interfaces (Redis, session, cookie, dual adapters); determines how user state is stored across requests.
  • lib/split/configuration.rb — Configuration module that sets up Split's runtime behavior, Redis connection, and adapter selection.

🛠️How to make changes

Add a New Randomization Algorithm

  1. Create a new algorithm class in lib/split/algorithms/ (e.g., my_algorithm.rb) that implements the required interface with a choose_alternative(experiment) method. (lib/split/algorithms/my_algorithm.rb)
  2. Register the algorithm in the algorithms registry by adding an entry in lib/split/algorithms.rb in the algorithm selection logic. (lib/split/algorithms.rb)
  3. Set the algorithm via configuration: Split.configuration.algorithm = :my_algorithm in config or during initialization. (lib/split/configuration.rb)
  4. Add test coverage in spec/algorithms/ directory to verify the algorithm correctly assigns alternatives. (spec/algorithms/my_algorithm_spec.rb)
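The steps above can be sketched with stub objects standing in for Split's real Experiment and Alternative classes — MyAlgorithm and the stubs are illustrative, but the module-level choose_alternative(experiment) entry point is the interface the steps describe:

```ruby
# Sketch of a custom algorithm. A real one would live in
# lib/split/algorithms/ and receive a Split::Experiment.
module MyAlgorithm
  # Pick the alternative with the fewest participants so far,
  # keeping the experiment roughly balanced.
  def self.choose_alternative(experiment)
    experiment.alternatives.min_by(&:participant_count)
  end
end

# Stub stand-ins for Split's classes, just to exercise the logic.
Alt = Struct.new(:name, :participant_count)
Exp = Struct.new(:alternatives)

exp = Exp.new([Alt.new("red", 10), Alt.new("green", 3)])
MyAlgorithm.choose_alternative(exp).name  # => "green"
```

Because the algorithm is a bare module with one class method, the spec in step 4 can drive it with similar stubs and never touch Redis.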

Add a New Persistence Adapter

  1. Create a new adapter class in lib/split/persistence/ (e.g., database_adapter.rb) that implements the standard persistence interface: save(user, experiment, alternative), load(user, experiment), etc. (lib/split/persistence/database_adapter.rb)
  2. Register the adapter in Split configuration by updating lib/split/configuration.rb and the persistence initialization logic. (lib/split/configuration.rb)
  3. Configure which adapter to use: Split.configuration.persistence = Split::Persistence::DatabaseAdapter.new in your app initializer. (lib/split.rb)
  4. Add comprehensive test coverage in spec/persistence/database_adapter_spec.rb covering storage, retrieval, and edge cases. (spec/persistence/database_adapter_spec.rb)
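A self-contained sketch of the adapter shape — the MemoryAdapter class below is hypothetical and models the adapter as a simple keyed store; mirror the exact method set of the shipped adapters in lib/split/persistence/ rather than this sketch:

```ruby
# Hypothetical in-memory adapter, for illustration only.
class MemoryAdapter
  def initialize
    @store = {}
  end

  # Read the stored value for a key (e.g. a user's assigned variant).
  def [](key)
    @store[key]
  end

  # Persist a value for a key.
  def []=(key, value)
    @store[key] = value
  end

  # Forget a key, e.g. when an experiment is reset.
  def delete(key)
    @store.delete(key)
  end

  # Enumerate stored keys.
  def keys
    @store.keys
  end
end

adapter = MemoryAdapter.new
adapter["checkout_button"] = "green"
adapter["checkout_button"]  # => "green"
```

An adapter like this is also handy inside specs (step 4), since it gives tests deterministic storage without Redis.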

Add a New Dashboard Feature or Control

  1. Update the dashboard ERB template in lib/split/dashboard/views/index.erb or add a new view file to display or control the new feature. (lib/split/dashboard/views/index.erb)
  2. Extend the dashboard helpers in lib/split/dashboard/helpers.rb to provide the necessary data or formatting for the new feature. (lib/split/dashboard/helpers.rb)
  3. Add client-side logic in lib/split/dashboard/public/dashboard.js to handle interactions (e.g., buttons, forms, updates). (lib/split/dashboard/public/dashboard.js)
  4. Add routes in lib/split/dashboard.rb (Sinatra app) to handle any new POST/PUT requests from the dashboard. (lib/split/dashboard.rb)
  5. Test the new feature with dashboard specs in spec/dashboard_spec.rb. (spec/dashboard_spec.rb)

Add Support for a New Experiment Goal Type or Custom Metric

  1. Create or extend lib/split/metric.rb to define the custom metric or goal type with its own tracking and aggregation logic. (lib/split/metric.rb)
  2. Update lib/split/experiment.rb to register and handle the new goal type in the experiment's goal management. (lib/split/experiment.rb)
  3. Extend lib/split/helper.rb or lib/split/trial.rb to provide a convenient public API for tracking the new goal (e.g., track_custom_metric method). (lib/split/helper.rb)
  4. Add test cases in spec/metric_spec.rb to verify the custom metric is calculated and persisted correctly. (spec/metric_spec.rb)

🔧Why these technologies

  • Redis — Primary persistence store for experiment state, user assignments, and goal metrics. Provides fast key-value access, atomic operations, and support for distributed systems.
  • Rack Middleware — Framework-agnostic integration pattern allows Split to work with any Rack-based app (Rails, Sinatra, etc.) without tight coupling.
  • Session/Cookie Adapters — Alternative persistence strategies for stateless deployments or when Redis is unavailable; enables a cookie-based fallback for user assignment.
  • Sinatra (Dashboard) — Lightweight web framework for the admin dashboard; minimal dependencies and easy Rack integration.

⚖️Trade-offs already made

  • Redis as mandatory primary store

    • Why: Simplifies concurrent access, atomic counters, and distributed experiment state without complex synchronization.
    • Consequence: Adds external dependency and operational overhead; not suitable for apps without Redis infrastructure.

🪤Traps & gotchas

  • Redis is mandatory: Split has no fallback if Redis is unavailable — experiments will raise errors.
  • Session/cookie dependency: ab_test tracking requires either persistent session storage or permanent cookies; stateless API backends won't work.
  • Participant ID binding: users are tracked by IP + user_id, so multi-device users may see different variants.
  • Experiment persistence: once you finish an experiment (mark all as complete), Redis keys linger — manual cleanup may be needed.
  • Rack requirement: must run on a Rack-compatible server; incompatible with CGI or non-Rack frameworks.

🏗️Architecture

💡Concepts to learn

  • Rack Middleware — Split integrates via Rack to transparently intercept HTTP requests and track participant identity before they reach controller logic
  • Block Randomization — One of Split's built-in algorithms (lib/split/algorithms/block_randomization.rb) that ensures balanced variant distribution in groups; critical for valid experiment statistics
  • Weighted Sampling — Implemented in lib/split/algorithms/weighted_sample.rb to allow unequal variant allocations (e.g., 80/20 splits); necessary for rollout testing
  • Redis Key-Value Store Persistence — Split uses Redis as the sole backing store for experiment state, variant assignments, and conversion counts; understanding Redis expiry, atomicity, and failover is essential
  • Session Affinity / Sticky Sessions — For multi-server deployments, Split requires session state to be consistent across requests; misconfigurations cause users to flip variants mid-experiment
  • Cohort Assignment — Split's core mechanic: once a participant is assigned a variant, they must always see the same variant; achieved via hashing participant ID + experiment name in Redis
Related projects

  • ryanb/abingo — Direct predecessor/inspiration to Split; similar Rails A/B testing gem that Split was designed to improve upon
  • assaf/vanity — Parallel Rails A/B testing framework with similar goals; excellent reference for alternative experiment design patterns
  • resque/resque — Mentioned in README as inspiration for Split's Redis usage pattern; foundational Rack-based job queue library
  • laserlemon/splitboard — Dashboard and analytics companion to Split; provides deeper metrics visualization for experiments
  • rails/rails — Primary integration target; Split mounts as a Rails engine and relies on Rails conventions
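The cohort-assignment mechanic described above — a deterministic variant derived from the participant ID plus the experiment name — can be sketched in isolation. This illustrates the general hashing technique, not Split's actual implementation (Split also persists assignments through its adapters):

```ruby
require "digest"

# Same user + same experiment always hashes to the same bucket,
# so the assignment is sticky without storing anything.
def assigned_variant(user_id, experiment_name, variants)
  digest = Digest::SHA256.hexdigest("#{user_id}:#{experiment_name}")
  variants[digest.to_i(16) % variants.length]
end

assigned_variant("user-42", "checkout_button", ["red", "green"])
# Repeated calls with the same inputs return the same variant.
```

The trade-off versus stored assignments: pure hashing cannot rebalance weights mid-experiment, which is one reason frameworks persist the choice instead.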

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive test coverage for persistence adapters

The repo has 4 persistence adapters (cookie, dual, redis, session) in lib/split/persistence/ but only redis_adapter appears to have dedicated specs. The persistence layer is critical for A/B test state management. Adding full test suites for cookie_adapter.rb, session_adapter.rb, and dual_adapter.rb would ensure reliability and catch edge cases in how user assignments are persisted.

  • [ ] Create spec/persistence/cookie_adapter_spec.rb with tests for cookie storage, expiration, and serialization
  • [ ] Create spec/persistence/session_adapter_spec.rb with tests for session state management across requests
  • [ ] Create spec/persistence/dual_adapter_spec.rb with tests for failover behavior between primary and secondary adapters
  • [ ] Ensure tests cover edge cases like missing cookies, expired sessions, and adapter switching

Add integration tests for combined_experiments_helper

lib/split/combined_experiments_helper.rb manages running multiple experiments simultaneously, which is complex logic. While spec/combined_experiments_helper_spec.rb exists, the file structure shows this is a critical helper but lacks corresponding integration tests that verify behavior across multiple experiments with different algorithms (weighted_sample, block_randomization, whiplash).

  • [ ] Create spec/integration/combined_experiments_integration_spec.rb
  • [ ] Add tests for running 2+ experiments concurrently with different algorithms (lib/split/algorithms/)
  • [ ] Test goal tracking consistency across combined experiments
  • [ ] Test user assignment consistency when same user participates in multiple experiments

Add missing specs for dashboard functionality and views

The dashboard is a significant feature with lib/split/dashboard/views/ containing multiple ERB templates and lib/split/dashboard/helpers.rb, but no corresponding spec/dashboard/ test directory exists. Dashboard features like experiment filtering (dashboard-filtering.js) and pagination (pagination_helpers.rb) lack test coverage.

  • [ ] Create spec/dashboard/helpers_spec.rb to test dashboard rendering helpers and experiment filtering logic
  • [ ] Create spec/dashboard/pagination_helpers_spec.rb to test paginator.rb and pagination behavior
  • [ ] Add tests for dashboard view rendering with various experiment states (running, completed, paused)
  • [ ] Test dashboard filtering functionality against dashboard-filtering.js implementation

🌿Good first issues

  • Write integration tests for the block_randomization algorithm (lib/split/algorithms/block_randomization.rb) against the dashboard—currently no test coverage visible for the randomization strategy validation in the web UI.
  • Add documentation examples to README for Sinatra setup with sessions and the ab_test hook (currently only Rails example is complete; the Sinatra section cuts off mid-example).
  • Implement participant reset/purge helpers in lib/split/experiment.rb with a public API method (e.g., experiment.purge_participants) so users can clean up old test data without direct Redis access.

Top contributors


📝Recent commits

  • 9477336 — Merge pull request #745 from snaka/fix/delegate-bundle-to-setup-ruby (andrehjr)
  • 615c37a — Fix RuboCop to inherit default Exclude patterns (snaka)
  • 4013b4a — chore: exclude vendor directory from RuboCop checks (snaka)
  • 17ad5b0 — ci: use bundler-cache in ruby/setup-ruby action (snaka)
  • 9387b12 — Merge pull request #743 from snaka/doc/cache-warning (andrehjr)
  • da10e43 — docs: add cache invalidation warning for Redis Caching (snaka)
  • cca970a — Merge pull request #742 from splitrb/refactor-experiment-load (andrehjr)
  • eb46487 — Add specs for ExperimentStorage (andrehjr)
  • c9b1c56 — Simplify load_alternatives from config (andrehjr)
  • ca2c402 — Avoid hset on the storage if the value did not change (andrehjr)

🔒Security observations

The Split A/B testing framework has a moderate security posture with several areas of concern. The primary risks involve potential XSS vulnerabilities in dashboard views, unvalidated session/cookie handling, and the exposure of a dashboard without explicit authentication. Additionally, outdated JavaScript dependencies (jQuery 1.11.1) present known security risks. The codebase lacks visible security hardening measures such as input validation, security headers, and dashboard authentication.

  • High · Potential XSS Vulnerability in Dashboard Views — lib/split/dashboard/views/. The dashboard views (lib/split/dashboard/views/*.erb) render experiment data and user-provided content. Without explicit evidence of proper HTML escaping in ERB templates, there is a risk of Cross-Site Scripting (XSS) attacks if experiment names, alternative names, or goal names are not properly sanitized. Fix: Ensure all user-controlled data rendered in ERB templates uses <%= %> with proper escaping (default in modern Rails) or explicitly use the sanitize or h helpers. Review _experiment.erb, _experiment_with_goal_header.erb, and index.erb for proper output encoding.
  • Medium · Potential Session Fixation via Cookie Adapter — lib/split/persistence/cookie_adapter.rb. The persistence layer includes a cookie adapter (lib/split/persistence/cookie_adapter.rb) that stores user identifiers in cookies. If cookies are not properly configured with secure flags (HttpOnly, Secure, SameSite), they could be vulnerable to session fixation or theft attacks. Fix: Ensure cookie configuration includes HttpOnly=true, Secure=true (in production), and SameSite=Strict/Lax flags. Review cookie expiration policies and implement proper cookie rotation.
  • Medium · Redis Connection Security Not Validated — lib/split/redis_interface.rb, lib/split/persistence/redis_adapter.rb. The Redis adapter (lib/split/persistence/redis_adapter.rb) and redis_interface.rb handle connections to Redis. Without visible evidence of SSL/TLS configuration, credential validation, or connection security in the static structure, there may be risks if Redis runs over unencrypted connections. Fix: Enforce Redis connections over SSL/TLS. Validate and test that Redis authentication is enforced. Use environment variables for credentials and never hardcode Redis URLs.
  • Medium · Dashboard Exposure Without Authentication — lib/split/dashboard.rb, lib/split/dashboard/views/. The dashboard (lib/split/dashboard.rb) provides a web interface for viewing experiments. The file structure suggests it may be exposed without explicit authentication mechanisms mentioned in the codebase. Fix: Implement authentication and authorization controls for the dashboard. Restrict access to authorized administrators only. Consider using middleware to require authentication before serving dashboard endpoints.
  • Low · Deprecated jQuery Library in Dashboard — lib/split/dashboard/public/jquery-1.11.1.min.js. The dashboard includes jquery-1.11.1.min.js (lib/split/dashboard/public/jquery-1.11.1.min.js), which is significantly outdated (released in 2014). Modern versions have critical security patches. Fix: Update jQuery to the latest stable version (3.x or newer). Review all JavaScript dependencies for outdated versions and apply security patches.
  • Low · Missing Security Headers in Dashboard Views — lib/split/dashboard/views/layout.erb. The dashboard layout (lib/split/dashboard/views/layout.erb) does not show explicit Content-Security-Policy, X-Frame-Options, or X-Content-Type-Options headers that are considered best practices for web applications. Fix: Add security headers through middleware or the framework configuration: Content-Security-Policy, X-Frame-Options: DENY, X-Content-Type-Options: nosniff, and Strict-Transport-Security.
  • Low · No Visible Input Validation for Experiment Configuration — lib/split/experiment.rb, lib/split/configuration.rb. The experiment and configuration modules (lib/split/experiment.rb, lib/split/configuration.rb) lack visible input validation patterns for experiment names, alternative names, and user-provided configurations. Fix: Implement comprehensive input validation for all user-provided values. Use allowlists for experiment identifiers. Validate and sanitize all configuration parameters.
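For the dashboard-exposure finding above, the usual mitigation is to wrap the dashboard in authentication before mounting it. A sketch using Rack's basic-auth middleware — the initializer path and SPLIT_USER/SPLIT_PASSWORD environment variable names are assumptions; verify the mounting pattern against Split's README for your framework:

```ruby
# config/initializers/split.rb (hypothetical host-app initializer)
require "split/dashboard"

# Refuse to serve any dashboard endpoint without valid credentials.
Split::Dashboard.use Rack::Auth::Basic do |username, password|
  # Constant-time comparison against secrets from the environment;
  # never hardcode credentials in the initializer.
  ActiveSupport::SecurityUtils.secure_compare(username, ENV.fetch("SPLIT_USER")) &
    ActiveSupport::SecurityUtils.secure_compare(password, ENV.fetch("SPLIT_PASSWORD"))
end
```

Mounting behind your app's existing admin authentication (e.g. an authenticated routing constraint) works equally well.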

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Healthy signals · splitrb/split — RepoPilot