
ZiggyCreatures/FusionCache

FusionCache is an easy-to-use, fast, and robust hybrid cache with advanced resiliency features.

Healthy across all four use cases

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 2d ago
  • 6 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Single-maintainer risk — top contributor 94% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/ziggycreatures/fusioncache)](https://repopilot.app/r/ziggycreatures/fusioncache)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/ziggycreatures/fusioncache on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: ZiggyCreatures/FusionCache

Generated by RepoPilot · 2026-05-10 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/ZiggyCreatures/FusionCache shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 2d ago
  • 6 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Single-maintainer risk — top contributor 94% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live ZiggyCreatures/FusionCache repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/ZiggyCreatures/FusionCache.

What it runs against: a local clone of ZiggyCreatures/FusionCache — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in ZiggyCreatures/FusionCache | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 32 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>ZiggyCreatures/FusionCache</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of ZiggyCreatures/FusionCache. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/ZiggyCreatures/FusionCache.git
#   cd FusionCache
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of ZiggyCreatures/FusionCache and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "ZiggyCreatures/FusionCache(\.git)?\b" \
  && ok "origin remote is ZiggyCreatures/FusionCache" \
  || miss "origin remote is not ZiggyCreatures/FusionCache (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in README.md docs/CoreMethods.md docs/CacheLevels.md docs/Backplane.md docs/FailSafe.md; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 32 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~2d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/ZiggyCreatures/FusionCache"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

FusionCache is a C# hybrid cache library that operates transparently as either a single-level L1 memory cache or a multi-level L1+L2 cache, where L2 can be any IDistributedCache implementation (Redis, etc.). It addresses cache stampede, cold-start, and horizontal-scalability problems through advanced features: fail-safe mechanisms, soft/hard timeouts, eager refresh, backplane sync, and comprehensive observability. The repository is a NuGet library: the core cache engine lives in src/ (inferred), a plugin/backplane architecture supports Redis and other distributed caches via IDistributedCache, and a benchmark suite lives under benchmarks/ZiggyCreatures.FusionCache.Benchmarks/ with execution, happy-path, and comparison benchmarks. Documentation in docs/ is narrative-driven, organized as feature-specific guides rather than an API reference.

👥Who it's for

C#/.NET backend engineers and architects building distributed systems who need production-grade caching with resiliency—especially those using ASP.NET Core, working with high-traffic systems, or requiring multi-node deployments with failover safety.

🌱Maturity & risk

Production-ready and actively maintained. FusionCache has been battle-tested in real-world high-volume systems for years, is used by Microsoft in Data API Builder, received a Google Open Source Award, and ships comprehensive benchmarks in /benchmarks. The project has structured CI/CD (.github/workflows/ci.yml, codeql-analysis.yml) and extensive documentation across docs/ with 60+ files. This is a mature, actively developed library with significant ecosystem adoption.

Low risk overall. The library is single-language (C# only, 1.8M lines), has a clear maintainer (the ZiggyCreatures organization), comprehensive test coverage, CI automation, and documented backward compatibility. The main caveat is the bus-factor signal above (one contributor authors 94% of recent commits), though the Google award and Microsoft adoption make outright abandonment unlikely. The other risk is .NET ecosystem change, but the project tracks Microsoft's HybridCache integration, showing active adaptation.

Active areas of work

Active development is visible in the comprehensive docs for recent features: OpenTelemetry support, Microsoft HybridCache integration (docs/MicrosoftHybridCache.md), adaptive caching, background distributed operations, disk cache support, and tagging. The presence of 60+ detailed documentation files suggests ongoing feature expansion and refinement beyond the core hybrid cache.

🚀Get running

git clone https://github.com/ZiggyCreatures/FusionCache.git
cd FusionCache
dotnet build ZiggyCreatures.FusionCache.slnx
dotnet test

No special install or setup visible; standard .NET tooling (dotnet CLI) is sufficient.

Daily commands:

dotnet build ZiggyCreatures.FusionCache.slnx
dotnet run --project benchmarks/ZiggyCreatures.FusionCache.Benchmarks/ZiggyCreatures.FusionCache.Benchmarks.csproj

No dev server; this is a library. Run benchmarks to see it in action, or reference in your own C# projects via NuGet.
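
To see the library in action from a scratch project of your own, the canonical entry point is GetOrSet, which combines stampede protection with per-call options. A minimal sketch against the documented API (LoadGreetingAsync is an illustrative placeholder, not repo code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using ZiggyCreatures.Caching.Fusion;

public static class QuickStart
{
    public static async Task Main()
    {
        // L1-only cache: no distributed cache configured, so entries stay in-memory.
        var cache = new FusionCache(new FusionCacheOptions());

        // GetOrSetAsync returns the cached value, or runs the factory exactly
        // once per key even under concurrent misses (cache-stampede protection).
        var value = await cache.GetOrSetAsync<string>(
            "greeting",
            async (ctx, ct) => await LoadGreetingAsync(ct),
            options => options.SetDuration(TimeSpan.FromMinutes(5))
        );

        Console.WriteLine(value);
    }

    // Placeholder factory standing in for a real database/API call.
    private static Task<string> LoadGreetingAsync(CancellationToken ct)
        => Task.FromResult("hello from the factory");
}
```

This requires only the ZiggyCreatures.FusionCache NuGet package; no Redis or other infrastructure is needed for the L1-only case.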

🗺️Map of the codebase

  • README.md — Entry point documenting FusionCache's core value proposition: hybrid L1/L2 caching with resiliency features, backplane, and IDistributedCache integration.
  • docs/CoreMethods.md — Essential reference for the primary API surface (Get, Set, Remove, etc.) that all contributors must understand for consistency.
  • docs/CacheLevels.md — Architectural foundation explaining the dual-level cache design (L1 memory + L2 distributed) that underpins all features.
  • docs/Backplane.md — Critical feature documentation for distributed cache coherence mechanisms that prevent stale data across instances.
  • docs/FailSafe.md — Core resiliency pattern allowing stale data fallback when factory fails, a defining characteristic of FusionCache.
  • .github/workflows/ci.yml — Build and test automation pipeline that validates all PRs and releases.
  • Directory.Build.props — Solution-wide configuration managing NuGet versioning, target frameworks, and assembly properties.

🛠️How to make changes

Add a Custom Serializer Plugin

  1. Study the serializer interface by reviewing docs/Plugins.md and examining existing implementations for IFusionCacheSerializer contract (docs/Plugins.md)
  2. Create a new class implementing IFusionCacheSerializer with Serialize and Deserialize methods (benchmarks/ZiggyCreatures.FusionCache.Benchmarks/SerializersBenchmark.cs)
  3. Register the serializer in dependency injection using AddFusionCache().WithSerializer() or in FusionCacheOptions (docs/DependencyInjection.md)
  4. Add benchmark tests following the SerializersBenchmark.cs pattern to validate performance characteristics (benchmarks/ZiggyCreatures.FusionCache.Benchmarks/SerializersBenchmark.cs)
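
A sketch of step 2, assuming a System.Text.Json-based serializer. The member shapes below are inferred from docs/Plugins.md; check the actual IFusionCacheSerializer interface in the source before implementing:

```csharp
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using ZiggyCreatures.Caching.Fusion.Serialization;

// Hypothetical serializer plugin. Member signatures (including the
// CancellationToken parameters) are assumptions — verify against the
// real IFusionCacheSerializer contract for your FusionCache version.
public class MyJsonSerializer : IFusionCacheSerializer
{
    public byte[] Serialize<T>(T? obj)
        => JsonSerializer.SerializeToUtf8Bytes(obj);

    public T? Deserialize<T>(byte[] data)
        => JsonSerializer.Deserialize<T>(data);

    public ValueTask<byte[]> SerializeAsync<T>(T? obj, CancellationToken token = default)
        => new(Serialize(obj));

    public ValueTask<T?> DeserializeAsync<T>(byte[] data, CancellationToken token = default)
        => new(Deserialize<T>(data));
}
```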

Add a New Cache Feature or Pattern

  1. Write comprehensive markdown documentation under docs/ following naming convention (e.g., docs/MyFeature.md) with diagrams and examples (docs/FactoryOptimization.md)
  2. Add corresponding benchmark in benchmarks/ZiggyCreatures.FusionCache.Benchmarks/ to measure overhead and validate efficacy (benchmarks/ZiggyCreatures.FusionCache.Benchmarks/LockerComparisonBenchmark.cs)
  3. Document integration with dependency injection in docs/DependencyInjection.md if applicable (docs/DependencyInjection.md)
  4. Update docs/README.md index to link to new documentation and increment version in Directory.Build.props if releasing (docs/README.md)

Add a Distributed Cache Backend Integration

  1. Review existing backend patterns in docs/RedisNotes.md and docs/DiskCache.md to understand IDistributedCache implementation requirements (docs/RedisNotes.md)
  2. Implement IDistributedCache from Microsoft.Extensions.Caching.Distributed with GetAsync, SetAsync, and RemoveAsync methods (docs/DiskCache.md)
  3. Write integration documentation explaining setup, connection configuration, and performance characteristics (docs/Comparison.md)
  4. Create benchmarks comparing your backend to Redis and in-memory options using the framework in benchmarks/ (benchmarks/ZiggyCreatures.FusionCache.Benchmarks/Program.cs)
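
Step 2 above targets Microsoft's standard contract, so a new backend only has to implement the eight IDistributedCache members. A minimal skeleton, with the actual storage logic left as TODO placeholders:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Hypothetical backend. IDistributedCache itself is the real
// Microsoft.Extensions.Caching.Abstractions interface; only the class
// name and TODO bodies here are illustrative.
public class MyBackendCache : IDistributedCache
{
    public byte[]? Get(string key) => null; // TODO: read from backend
    public Task<byte[]?> GetAsync(string key, CancellationToken token = default)
        => Task.FromResult(Get(key));

    public void Set(string key, byte[] value, DistributedCacheEntryOptions options)
    { /* TODO: write to backend, honoring options.AbsoluteExpiration etc. */ }
    public Task SetAsync(string key, byte[] value, DistributedCacheEntryOptions options,
        CancellationToken token = default)
    { Set(key, value, options); return Task.CompletedTask; }

    public void Refresh(string key) { /* TODO: reset sliding expiration */ }
    public Task RefreshAsync(string key, CancellationToken token = default)
    { Refresh(key); return Task.CompletedTask; }

    public void Remove(string key) { /* TODO: delete from backend */ }
    public Task RemoveAsync(string key, CancellationToken token = default)
    { Remove(key); return Task.CompletedTask; }
}
```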

Implement Observability & Monitoring

  1. Reference OpenTelemetry integration pattern in docs/OpenTelemetry.md and docs/Logging.md for instrumentation points (docs/OpenTelemetry.md)
  2. Add event hooks and metrics emission following the Events pattern documented in docs/Events.md (docs/Events.md)
  3. Validate metrics are correctly emitted under cache hit/miss/refresh scenarios using logging benchmarks (benchmarks/ZiggyCreatures.FusionCache.Benchmarks/LoggingBenchmark.cs)

🔧Why these technologies

  • Hybrid L1/L2 Cache Architecture — Combines in-process memory cache speed with distributed persistence for scalability; provides cold start optimization and cross-instance coherence without forcing all traffic through Redis
  • IDistributedCache Interface — Standard Microsoft.Extensions.Caching abstraction enables pluggable backends (Redis, MemoryCache, custom disks); avoids vendor lock-in and allows teams to choose their infrastructure
  • Backplane Pattern — Solves cache coherence in distributed systems by broadcasting invalidation messages across instances, ensuring stale data is not served after updates
  • Fail-Safe with Stale Data — When factory (database/API) fails, gracefully serves cached stale data instead of errors; improves availability for read-heavy workloads where slightly outdated data is preferable to 500 errors
  • Locking & Thundering Herd Mitigation — Prevents cache stampede: when multiple threads miss the cache simultaneously, only one executes the expensive factory while the others wait for its result; reduces backend load during recovery
  • Adaptive Caching & Eager Refresh — Dynamically adjusts TTLs based on failure rates and proactively refreshes before expiration; reduces tail latency and failure windows in unpredictable environments
  • OpenTelemetry + Logging — Structured observability enables production debugging without vendor lock-in; metrics and traces reveal cache hit rates, factory latencies, and failure patterns
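
Several of these choices surface directly in DI wiring. A hedged sketch — AddFusionCache and WithDefaultEntryOptions match the documented builder API, while the commented extension points stand in for whichever serializer, L2, and backplane packages you actually install:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using ZiggyCreatures.Caching.Fusion;

var services = new ServiceCollection();

services.AddFusionCache()
    .WithDefaultEntryOptions(options => options
        .SetDuration(TimeSpan.FromMinutes(5))       // applies to L1 and L2
        .SetFailSafe(true, TimeSpan.FromHours(2))); // serve stale on factory failure
    // .WithSerializer(...)       // required once an L2 is configured
    // .WithDistributedCache(...) // any IDistributedCache implementation → enables L2
    // .WithBackplane(...)        // pub/sub invalidation across nodes
```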

⚖️Trade-offs already made

  • Hybrid L1+L2 over pure distributed cache

    • Why: L1 provides sub-millisecond latency locally; L2 provides consistency and scalability
    • Consequence: Requires backplane to maintain coherence; slightly more operational complexity than single-layer caches; memory footprint per instance
  • Fail-safe via stale data fallback

    • Why: Maximizes availability when factory fails; degrades gracefully rather than erroring
    • Consequence: Requires careful TTL tuning so stale data age is acceptable; not suitable for strongly-consistent transactional scenarios (financial ledgers, auth tokens)
  • Factory timeout with fallback vs synchronous blocking

    • Why: Timeouts prevent cache-local hangs from backend failures; fallback keeps system responsive
    • Consequence: Requires factory implementations to be cancellable and safe to keep running in the background after a soft timeout has already returned stale data

🪤Traps & gotchas

  • No explicit env vars or secret configuration visible.
  • The main gotcha: the IDistributedCache backend (Redis, etc.) must be configured separately; FusionCache doesn't bundle it — you inject an implementation.
  • The backplane requires a separate pub/sub system (not included).
  • The library uses soft/hard timeout semantics that differ from standard lock timeouts — read docs/Timeouts.md carefully when tuning.
  • The AutoClone feature for mutable objects can have performance implications; see docs/AutoClone.md.
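
The soft/hard timeout gotcha can be made concrete with per-entry options. A sketch assuming the documented FusionCacheEntryOptions builder methods; verify exact member names against the current release:

```csharp
using System;
using ZiggyCreatures.Caching.Fusion;

var options = new FusionCacheEntryOptions()
    .SetDuration(TimeSpan.FromMinutes(5))
    // Fail-safe must be enabled for soft timeouts to have stale data to fall back to.
    .SetFailSafe(true, TimeSpan.FromHours(2))
    // First arg (soft): if stale data exists, stop waiting after 100 ms and
    // serve it while the factory keeps running in the background.
    // Second arg (hard): abandon the factory at 2 s even with no fallback.
    .SetFactoryTimeouts(TimeSpan.FromMilliseconds(100), TimeSpan.FromSeconds(2));
```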

🏗️Architecture

💡Concepts to learn

  • Cache Stampede (Thundering Herd) — The core problem FusionCache solves: when many threads request the same expired cache key simultaneously, they all hit the backend. FusionCache serializes factory calls with locking to protect the backend. See docs/CacheStampede.md.
  • Fail-Safe Mechanism (Stale-Data Fallback) — Unique FusionCache feature: on factory/L2 failure, it returns stale cached data instead of throwing. Critical for resilient distributed systems. See docs/FailSafe.md.
  • Soft/Hard Timeout — FusionCache distinguishes between soft timeouts (use stale data if factory exceeds limit) and hard timeouts (abandon operation entirely). Understanding this is essential for tuning performance vs resilience trade-offs.
  • Backplane (Distributed Invalidation) — FusionCache can sync cache invalidations across multiple nodes in real-time via a pub/sub backplane. Solves cache coherency in distributed systems. See docs/Backplane.md.
  • Two-Level Caching (L1+L2 Hybrid Architecture) — L1 is fast in-memory cache per node; L2 is shared distributed cache (Redis). FusionCache transparently manages both, providing cold-start resilience and scalability. This is the core architectural pattern.
  • Adaptive Caching — FusionCache can adjust cache durations based on factory execution time or failure patterns. Useful for heterogeneous workloads with variable backend latency. See docs/AdaptiveCaching.md.
  • IDistributedCache Interface — FusionCache plugs into .NET's standard IDistributedCache contract, allowing swappable backends (Redis via StackExchange.Redis, SQL Server, etc.). Understanding this interface is essential for configuring L2.
  • microsoft/runtime — Contains the IDistributedCache interface and Microsoft's new HybridCache that FusionCache integrates with (see docs/MicrosoftHybridCache.md)
  • StackExchange/StackExchange.Redis — De facto Redis client for C#; the typical L2 backend used with FusionCache's IDistributedCache interface
  • dotnet/aspnetcore — Home of ASP.NET Core caching interfaces and dependency injection; FusionCache targets this ecosystem
  • open-telemetry/opentelemetry-dotnet — FusionCache implements OpenTelemetry observability (see docs/OpenTelemetry.md); this is the instrumentation library it integrates with
  • sebastienros/jint — A JavaScript interpreter for .NET, not a caching library; this AI-suggested reference appears spurious, so verify it before relying on it
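
The adaptive-caching concept above can be sketched with FusionCache's factory execution context, which lets the factory change the entry's duration from inside the call (pattern per docs/AdaptiveCaching.md; GetPriceAsync and FetchPriceAsync are illustrative placeholders, not repo code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using ZiggyCreatures.Caching.Fusion;

public static class AdaptiveExample
{
    public static async Task<decimal> GetPriceAsync(IFusionCache cache, string symbol)
    {
        return await cache.GetOrSetAsync<decimal>(
            $"price:{symbol}",
            async (ctx, ct) =>
            {
                var (price, isVolatile) = await FetchPriceAsync(symbol, ct);

                // Adaptive caching: volatile symbols get a short TTL,
                // stable ones a long one — decided per call, per entry.
                ctx.Options.Duration = isVolatile
                    ? TimeSpan.FromSeconds(30)
                    : TimeSpan.FromMinutes(30);

                return price;
            });
    }

    // Placeholder standing in for a real market-data call.
    private static Task<(decimal, bool)> FetchPriceAsync(string symbol, CancellationToken ct)
        => Task.FromResult((42.0m, false));
}
```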

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive integration tests for Backplane synchronization across distributed cache scenarios

The repo has extensive documentation on Backplane (docs/Backplane.md) and BackgroundDistributedOperations (docs/BackgroundDistributedOperations.md), but the test coverage for multi-instance cache coherence scenarios appears limited. New contributors could add integration tests validating cache invalidation, tag-based updates, and stale-while-revalidate behavior across multiple FusionCache instances connected via a backplane, which is critical for production hybrid cache deployments.

  • [ ] Review existing tests in the test project (not shown in file list) to understand current backplane test coverage
  • [ ] Create new test file(s) covering: multi-instance invalidation, backplane message ordering, tag-based distributed updates
  • [ ] Add tests for edge cases: network delays, out-of-order notifications, instance failures during distributed operations
  • [ ] Validate the scenarios illustrated in docs/images/cache-coherence-*.png are covered by tests
  • [ ] Ensure tests cover both synchronous and asynchronous backplane implementations (AsyncBackplane patterns)

Add OpenTelemetry instrumentation tests and validation suite

The repo documents OpenTelemetry support (docs/OpenTelemetry.md) but there's no visible test suite validating that metrics, traces, and logs are properly emitted. A new contributor could build a comprehensive validation suite that ensures all cache operations (hits, misses, factory executions, failures) correctly emit OpenTelemetry signals, which is essential for observability-critical production deployments.

  • [ ] Create a new test class validating OpenTelemetry.Api instrumentation points
  • [ ] Add tests for meter creation, counter increments (cache hits/misses/factory calls), and histogram recordings (operation latencies)
  • [ ] Add validation that all failure scenarios (FailSafe activations, timeouts, factory exceptions) emit appropriate traces and metrics
  • [ ] Test that ActivitySource correctly tracks distributed tracing context across async boundaries
  • [ ] Add integration tests with a mock OpenTelemetry collector to validate end-to-end signal flow

Add comprehensive unit tests for Adaptive Caching and Conditional Refresh logic

The repo features Adaptive Caching (docs/AdaptiveCaching.md) and Conditional Refresh (docs/ConditionalRefresh.md) as advanced resiliency features, but test coverage for the state machine transitions, TTL adjustments, and refresh condition evaluations appears to need expansion. New contributors could add detailed unit tests for edge cases in adaptive behavior, ensuring the cache intelligently adjusts to changing conditions and only refreshes when appropriate.

  • [ ] Create test cases for adaptive TTL increase/decrease logic under various hit/miss ratios
  • [ ] Add tests validating conditional refresh triggers (e.g., refresh only when hit ratio drops below threshold)
  • [ ] Test state transitions: initial mode → adaptive mode → recovery mode with proper timeout handling
  • [ ] Add edge case tests: rapid condition changes, zero-duration windows, concurrent adaptive updates
  • [ ] Validate that adaptive caching respects max/min TTL bounds and doesn't cause cache thrashing

🌿Good first issues

  • Add integration tests in /tests for the disk cache feature (docs/DiskCache.md) similar to patterns in benchmarks/SequentialComparisonBenchmark*.cs; currently documented but test coverage may be incomplete.
  • Expand docs/DependencyInjection.md with concrete examples for IServiceCollection extension methods (similar to how docs/PluginSample.md exists); dependency injection patterns are documented but lack runnable samples.
  • Add a new benchmark file benchmarks/ZiggyCreatures.FusionCache.Benchmarks/BackplaneLatencyBenchmark.cs measuring multi-node sync overhead; the backplane is a key feature but no dedicated benchmark exists alongside execution and happy-path benchmarks.


📝Recent commits

  • 99b5fe4 — Update CacheLevels.md (jodydonetti)
  • e9e974d — Fix speficy -> specify typo in CacheLevels docs (#616) (pikammmmm)
  • 4095ae1 — Packages update (jodydonetti)
  • bb0ad43 — Bump package ref to fix OTEL's lib CVE (jodydonetti)
  • 21857f4 — Always include the tag on traces when doing RemoveByTag(tag) (jodydonetti)
  • 5cc10f7 — When doing the L2->L1 copy, ignore the distributed entry if the memory entry is more fresh (jodydonetti)
  • 6a691e3 — Minor (jodydonetti)
  • 070af47 — Minor (jodydonetti)
  • ba03d95 — Update StepByStep.md (jodydonetti)
  • bf8062c — Update StepByStep.md (jodydonetti)

🔒Security observations

The FusionCache repository demonstrates good security practices overall. No critical vulnerabilities were identified in the provided file structure. The codebase appears to be well-organized with proper CI/CD workflows (including CodeQL analysis), no hardcoded credentials visible, and no obvious injection risks based on the naming conventions. However, minor improvements include: (1) adding a SECURITY.md policy file for responsible disclosure, (2) ensuring dependency lock files are committed for supply chain security, and (3) potentially enhancing security testing documentation in CI/CD workflows. As a caching library, the attack surface is inherently limited since it handles in-memory and distributed caching operations rather than user input parsing or database queries. The project's MIT license and active maintenance contribute positively to security posture.

  • Low · Missing Security Policy Documentation — Repository root. No SECURITY.md file found in the repository root. This file is important for responsible disclosure of security vulnerabilities and provides security contact information. Fix: Create a SECURITY.md file following GitHub's security policy template to define vulnerability disclosure procedures and contact information.
  • Low · Potential Dependency Supply Chain Risk — Project configuration files. No explicit dependency lock file (packages.lock.json) is visible in the provided file structure for a C# NuGet project. This could lead to non-deterministic builds if transitive dependencies are updated. Fix: Enable package lock files (RestoreUseStaticGraphEvaluation) in Directory.Build.props and commit packages.lock.json to ensure reproducible builds and easier security auditing of dependencies.
  • Low · Missing Security Testing Documentation — .github/workflows. The repository lacks explicit security testing guidelines. While benchmarks and tests are present, there is no visible documentation of security testing practices. Fix: Document security testing practices and ensure CI/CD workflows include security scanning (SAST/DAST) as part of the build pipeline.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Healthy signals · ZiggyCreatures/FusionCache — RepoPilot