RepoPilot

patrickmn/go-cache

An in-memory key:value store/cache (similar to Memcached) library for Go, suitable for single-machine applications.

Mixed

Stale — last commit 2y ago

Weakest axis: Use as dependency

  • Use as dependency: Concerns (last commit was 2y ago; top contributor handles 95% of recent commits…)
  • Fork & modify: Mixed (no tests detected; no CI workflows detected…)
  • Learn from: Healthy (documented and popular — a useful reference codebase to read through)
  • Deploy as-is: Mixed (last commit was 2y ago; no CI workflows detected)

  • 6 active contributors
  • MIT licensed
  • Stale — last commit 2y ago
  • Single-maintainer risk — top contributor 95% of recent commits
  • No CI workflows detected
  • No test directory detected
What would change the summary?
  • Use as dependency: Concerns → Mixed if: 1 commit in the last 365 days
  • Fork & modify: Mixed → Healthy if: a test suite is added
  • Deploy as-is: Mixed → Healthy if: 1 commit in the last 180 days

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Great to learn from" badge

Paste into your README — live-updates from the latest cached analysis.

RepoPilot: Great to learn from
[![RepoPilot: Great to learn from](https://repopilot.app/api/badge/patrickmn/go-cache?axis=learn)](https://repopilot.app/r/patrickmn/go-cache)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/patrickmn/go-cache on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: patrickmn/go-cache

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/patrickmn/go-cache shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Stale — last commit 2y ago

  • 6 active contributors
  • MIT licensed
  • ⚠ Stale — last commit 2y ago
  • ⚠ Single-maintainer risk — top contributor 95% of recent commits
  • ⚠ No CI workflows detected
  • ⚠ No test directory detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live patrickmn/go-cache repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/patrickmn/go-cache.

What it runs against: a local clone of patrickmn/go-cache — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in patrickmn/go-cache | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 929 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>patrickmn/go-cache</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of patrickmn/go-cache. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/patrickmn/go-cache.git
#   cd go-cache
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of patrickmn/go-cache and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "patrickmn/go-cache(\.git)?\b" \
  && ok "origin remote is patrickmn/go-cache" \
  || miss "origin remote is not patrickmn/go-cache (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. The MIT license body does not
# necessarily contain the word "MIT", so also match its distinctive
# permission clause.
(grep -qiE "(^MIT|Permission is hereby granted, free of charge)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in cache.go sharded.go cache_test.go sharded_test.go README.md; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 929 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~899d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/patrickmn/go-cache"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

go-cache is a thread-safe, in-memory key:value store written in pure Go that mimics Memcached behavior but runs as a library within a single process. It stores arbitrary Go objects with configurable expiration times and automatically purges expired entries, eliminating the need for network serialization or external cache infrastructure. Simple two-file architecture: cache.go contains the core Cache type with Set/Get/Delete/Flush methods and expiration logic; sharded.go wraps Cache with per-shard mutexes to reduce lock contention under high concurrency. Both files export public structs and use unexported helper methods for internal state management.

👥Who it's for

Go developers building single-machine applications (web services, CLIs, microservices) who need fast, temporary data storage without Redis/Memcached operational overhead. Typical users are backend engineers optimizing hot data paths where network latency to external caches is unacceptable.

🌱Maturity & risk

Highly mature and stable. The project has minimal code (cache.go and sharded.go are the core), tests covering both variants (cache_test.go, sharded_test.go), and clear API contracts published via godoc. Note, however, that git history was not visible to this pass; judge liveness from the repository's actual commit timeline. The clean implementation style and absence of TODO comments are encouraging but are not maintenance signals.

Low risk. The repo has zero external Go dependencies (pure stdlib), a single-file design pattern that's easy to audit, and explicit acknowledgment of non-persistence (not a PersistentDatastore). Main risk: single maintainer (patrickmn), so community PRs may have delayed review. No breaking changes visible in the stable API surface.

Active areas of work

Unable to determine from file metadata alone (no git log, PR list, or issues visible). Assess liveness by checking the GitHub repo's commit timeline, open issues, and last release date at github.com/patrickmn/go-cache.

🚀Get running

git clone https://github.com/patrickmn/go-cache.git && cd go-cache && go test ./... # No external dependencies, no build step required.

Daily commands: there is no server to run. This is a library: import github.com/patrickmn/go-cache in your Go code and instantiate with cache.New(defaultExpiration, cleanupInterval). Run tests with: go test -v ./...

🗺️Map of the codebase

  • cache.go — Core cache implementation with Get, Set, Delete, and expiration logic; every contributor must understand the thread-safe map and cleanup mechanisms.
  • sharded.go — Sharded cache variant for high-concurrency scenarios; essential for understanding performance optimization trade-offs in this library.
  • cache_test.go — Comprehensive unit tests covering expiration, concurrency, and edge cases; validates correctness-critical behavior.
  • sharded_test.go — Tests for sharded cache including goroutine safety; ensures the optimized variant maintains correctness under load.
  • README.md — Documents library purpose, API, and usage patterns; clarifies scope boundaries and design intentions for new contributors.

🧩Components & responsibilities

  • Cache struct (cache.go) (Go map, sync.RWMutex, time.Duration, interface{}) — Stores items in a map, enforces TTL expiration, and coordinates thread-safe access.
    • Failure mode: If mutex deadlocks or cleanup goroutine panics, concurrent access may hang or memory may leak expired items.
  • Janitor (cache.go) (time.Ticker, goroutine) — Background goroutine that periodically scans and deletes expired items.
    • Failure mode: If janitor stops, expired items accumulate in memory; eventual memory exhaustion if items are never manually deleted.
  • ShardedCache (sharded.go) (hash function, slice of *Cache) — Wraps multiple Cache instances to distribute key load and reduce lock contention.
    • Failure mode: Hash distribution may be uneven for poor key distributions; individual shard failures cascade to ShardedCache.

🔀Data flow

  • Application → Cache.Set() — Application writes key:value pairs with optional expiration duration.
  • Cache.Set() → item map (RWMutex-protected) — Key and Item (value + expiration time) stored in a concurrency-safe map.
  • Application → Cache.Get() — Application reads values by key.
  • Cache.Get() → item map — Retrieves the item and checks whether its expiration time has passed.
  • Janitor goroutine → item map — Periodically scans all items and removes those with expiration time <= now.

🛠️How to make changes

Add a custom expiration-based feature

  1. Study the Item struct and janitorFunc in cache.go to understand how expiration times are stored and checked (cache.go)
  2. Add your new method to the Cache struct (e.g., SetWithCallback) following the pattern of Set and Delete (cache.go)
  3. Write unit tests covering normal operation, expiration edge cases, and goroutine safety (cache_test.go)

Optimize for higher concurrency

  1. Review the sharding strategy in sharded.go to understand how it distributes keys across cache instances (sharded.go)
  2. Add new sharded methods following the pattern of existing wrappers (e.g., OnEvicted, Flush) (sharded.go)
  3. Add concurrent stress tests to sharded_test.go to validate your changes under high goroutine count (sharded_test.go)

Add persistence or serialization support

  1. Review Items() method and NewFrom() constructor in cache.go to understand the current serialization interface (cache.go)
  2. Create helper functions that marshal/unmarshal the cache state using encoding/json or similar (cache.go)
  3. Write tests that verify round-trip serialization and expiration times are preserved (cache_test.go)

🔧Why these technologies

  • Go sync.RWMutex — Provides efficient read-heavy concurrent access with minimal lock contention for typical cache workloads.
  • Goroutines + time.Ticker — Enables background expiration cleanup without blocking the main cache operations.
  • map[string]interface{} — Simple, built-in data structure for arbitrary value types without serialization overhead.
  • Sharding (hash-based partitioning) — Reduces lock contention under high concurrency by distributing keys across multiple independent cache instances.

⚖️Trade-offs already made

  • Single-machine only (no distributed caching)

    • Why: Simplicity and zero network overhead for single-server applications.
    • Consequence: Cannot scale across multiple machines; suitable only for single-server deployments.
  • In-memory storage with optional file-based recovery

    • Why: Maximum performance and simplicity; data loss on restart is acceptable for cache use cases.
    • Consequence: Not suitable as persistent datastore; recovery via NewFrom() requires manual serialization logic.
  • Janitor goroutine for background cleanup

    • Why: Lazy deletion would defer cleanup cost; background cleanup keeps memory bounded.
    • Consequence: Small overhead of background goroutine; configurable cleanup interval allows tuning.
  • RWMutex over fine-grained locking

    • Why: Simplicity and correctness; good enough for typical workloads.
    • Consequence: Sharded variant available for high-concurrency cases where lock contention becomes bottleneck.

🚫Non-goals (don't propose these)

  • Distributed caching across multiple machines (use Memcached, Redis for that)
  • Persistent storage (cache data is lost on restart unless manually serialized)
  • Advanced eviction policies beyond TTL expiration (no LRU, LFU, or size limits)
  • Real-time synchronization or replication
  • Authentication or access control

📊Code metrics

  • Avg cyclomatic complexity: ~2.5 — Core cache logic is straightforward (map operations, mutex locking); sharding and janitor add moderate complexity. Expiration check is O(1) per item, janitor is O(n) per interval.
  • Largest file: cache.go (320 lines)
  • Estimated quality issues: ~3 — Minor issues: unbounded cleanup interval, no size limits, interface{} type assertions. No critical bugs detected; overall code is clean and well-tested.

⚠️Anti-patterns to avoid

  • Unbounded cleanup interval (Medium) — cache.go, New() constructor: users can set cleanupInterval to 0 or negative, which disables background cleanup entirely; expired items accumulate until manually deleted.
  • interface{} type assertions required (Low) — cache.go, Get() return value: Get() returns interface{}, forcing callers to perform a type assertion; no type safety at compile time.
  • No maximum cache size (Medium) — cache.go, Set() method: the cache can grow without bound; no eviction policy or size limit prevents OOM in pathological scenarios.

🔥Performance hotspots

  • cache.go - Janitor (cleanupInterval loop) (CPU / Algorithmic) — Full map scan on every cleanup interval; O(n) iteration over all items regardless of how many are expired. For large caches with low expiration churn, this wastes CPU.
  • cache.go - RWMutex in Cache struct (Synchronization / Lock contention) — Single RWMutex serializes all writes and exclusive reads; under very high write concurrency, lock contention becomes bottleneck. (Sharded variant mitigates this.)
  • sharded.go - Hash function for shard selection (Load distribution) — Hash function distributes keys; poor key distribution (e.g., sequential numeric keys) may cause uneven shard load.

🪤Traps & gotchas

  • Type assertion required for retrieval: cache.Get() returns interface{}, so callers must assert types (e.g., foo.(string)).
  • Expiration is best-effort: items are only purged during the janitor cleanup interval, not on access.
  • File persistence via SaveFile/LoadFile is not atomic; concurrent writes during a save can yield inconsistent snapshots.
  • The background cleanup goroutine is not explicitly stoppable (a design choice for simplicity).


💡Concepts to learn

  • Time-based expiration and TTL (Time-To-Live) — Core mechanism in go-cache: every item has an optional expiration time, and a background janitor goroutine periodically scans and deletes expired entries. Understanding this prevents data leaks and explains cleanup overhead.
  • Reader-Writer Mutex (RWMutex) — cache.go uses sync.RWMutex to allow multiple concurrent readers without blocking, only serializing on writes; sharded.go reduces contention further by partitioning the cache.
  • Type assertion and interface{} storage — All cache values are stored as interface{} (Go's universal type), requiring callers to assert concrete types; this trades type safety for flexibility and is a core design trade-off in the library.
  • Sharding for lock-free concurrency — sharded.go divides the cache into multiple independent shards, each with its own mutex, to reduce lock contention under high concurrency; this is a horizontal scaling pattern within a single process.
  • Janitor cleanup pattern (background goroutine for maintenance) — Instead of checking expiration on each access, go-cache spawns a background janitor goroutine that periodically wakes up and removes stale items; this trades CPU (periodic wakeups) for simplicity.
  • Serialization for persistence (Items() snapshot) — cache.Items() returns a snapshot of all current items, which can be marshaled to JSON/gob and loaded back via NewFrom(); understanding the snapshot consistency model is crucial for crash recovery.
  • Lock-free reads with time.Now() staleness — Expiration checks in Get() use time.Now() without holding a write lock; this means a very recently expired item may be returned briefly after its TTL, trading strict correctness for performance.
🔗Related libraries & alternatives

  • bradfitz/gomemcache — Client library for the Memcached protocol in Go; use when you need a networked cache backend instead of in-process storage.
  • allegro/bigcache — High-performance in-memory cache for Go with a focus on large datasets and minimal GC pressure; alternative if you need better memory efficiency at the cost of complexity.
  • jellydator/ttlcache — Another Go in-memory cache with TTL support and additional features like eviction policies; direct competitor with more knobs.
  • go-redis/redis — Redis client for Go; use when you graduate from single-machine caching to distributed, persistent cache needs.
  • golang/groupcache — Google's distributed caching library for Go; relevant if you outgrow single-machine use and need peer-to-peer cache replication.

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add concurrent stress tests and race condition detection in cache_test.go

While cache.go is thread-safe with mutex protection, cache_test.go lacks comprehensive concurrent stress tests. Add tests that hammer the cache with simultaneous Get/Set/Delete operations from many goroutines while running with 'go test -race' to catch potential race conditions in edge cases like expiration during concurrent access. This ensures thread-safety guarantees hold under real-world load patterns.

  • [ ] Add TestConcurrentGetSet with 100+ goroutines performing mixed operations
  • [ ] Add TestConcurrentDeleteExpired with simultaneous deletes and expirations
  • [ ] Add TestRaceConditionOnCleanup to verify janitor goroutine safety
  • [ ] Document running tests with '-race' flag in README.md
  • [ ] Verify sharded_test.go has equivalent concurrent coverage

Add benchmarks for Get/Set/Delete operations in cache_test.go

The repo lacks performance benchmarks despite being a performance-critical in-memory cache library. Add Go benchmark functions to measure throughput of core operations (Get, Set, Delete) and compare performance between regular Cache and Sharded variants. This helps maintainers catch performance regressions in PRs and helps users make informed decisions about cache configuration.

  • [ ] Add BenchmarkCacheGet, BenchmarkCacheSet, BenchmarkCacheDelete to cache_test.go
  • [ ] Add equivalent BenchmarkShardedGet, BenchmarkShardedSet, BenchmarkShardedDelete to sharded_test.go
  • [ ] Include benchmarks with varying dataset sizes (100, 10K, 1M items)
  • [ ] Document benchmark results and performance characteristics in README.md
  • [ ] Add instructions for running benchmarks with 'go test -bench'

Add comprehensive documentation for the Sharded cache variant in README.md

README.md only documents the basic Cache API but completely omits guidance on the Sharded variant implemented in sharded.go. Sharded caches are crucial for high-concurrency scenarios, yet users have no documentation comparing when to use Cache vs. Sharded, how to choose shard count, or performance implications. Add a dedicated section with examples.

  • [ ] Add 'Sharded Cache' section to README.md explaining use cases (high-concurrency, reduced lock contention)
  • [ ] Include example code showing NewSharded() initialization with recommended shard counts
  • [ ] Document shard count selection guidance (e.g., number of CPU cores, typical goroutine count)
  • [ ] Add performance comparison table: regular Cache vs. Sharded under different concurrency levels
  • [ ] Include gotchas like Items() behavior on sharded caches if different from regular cache

🌿Good first issues

  • Add a GetWithExpiration() method to cache.go that returns both the value and remaining TTL, similar to Redis PTTL; requires modifying the Item struct tracking and public API. (Check first: recent versions of cache.go already export a GetWithExpiration that returns the value and its expiration time.)
  • Implement a Peek() read-only method that retrieves items without updating internal access metrics, useful for non-eviction caches; add tests for concurrent Peek/Set contention.
  • Create a bench_test.go file with benchmarks for Set, Get, and concurrent workloads under various shard counts; compare cache.go vs sharded.go performance with 'go test -bench=.' output.


📝Recent commits

  • 46f4078 — Fix incorrect key in concurrent benchmarks (patrickmn)
  • 8026b57 — LICENSE: Update copyright years (patrickmn)
  • 5633e08 — LICENSE: Update years (patrickmn)
  • 9f6ff22 — Fix benchmark for-loop shadowing (patrickmn)
  • a3647f8 — Merge pull request #64 from inf-rno/master (patrickmn)
  • 0640633 — Fix race condition (inf-rno)
  • 7ac1518 — Merge pull request #46 from alexedwards/master (patrickmn)
  • ea4bd2a — LICENSE: Update copyright years (patrickmn)
  • 96426d0 — README.md: Remove the unprotected change example since it would actually need external synchronization (patrickmn)
  • dd1ed0b — README.md: Remove one level of indentation and increase 'recommended' cleanupInterval (patrickmn)

🔒Security observations

The go-cache library has a reasonable security posture for an in-memory caching library. The primary security concern is the deserialization mechanism (NewFrom) which could be exploited if loading untrusted data. Secondary concerns include potential race conditions under specific usage patterns and lack of explicit memory constraints. No critical vulnerabilities were identified based on the provided file structure. The codebase appears to be well-maintained (Go standard patterns suggest proper mutex usage), but security documentation should be enhanced to clarify safe usage patterns and limitations.

  • Medium · Potential Race Condition in Concurrent Access — cache.go, sharded.go. While go-cache uses mutexes for thread-safety, the README indicates that 'the cache can be safely used by multiple goroutines.' However, without examining the implementation details, there's a potential risk of time-of-check-time-of-use (TOCTOU) vulnerabilities if users perform check-then-act operations across multiple cache operations. Fix: Review and document the atomic guarantees provided by the cache API. Consider providing compound operations (e.g., GetAndDelete) for scenarios requiring transactional semantics. Add warnings in documentation about non-atomic multi-step operations.
  • Medium · Unsafe Deserialization via NewFrom() — cache.go (NewFrom function). The README mentions that the cache can be loaded from a file using NewFrom() with caveats noted in the documentation. Deserializing untrusted data without proper validation could lead to arbitrary code execution or denial of service if the deserialized objects are maliciously crafted. Fix: Ensure NewFrom() validates the input data structure before deserialization. Document security implications clearly. Consider implementing a signature verification mechanism for persisted cache files. Use type-safe deserialization patterns.
  • Low · Missing Input Validation on Key Names — cache.go. The cache accepts any string as a key without apparent validation. While not inherently dangerous for an in-memory cache, this could enable cache pollution attacks if keys are derived from user input without sanitization. Fix: Document expected key format and validate against the documented constraints. Consider implementing key length limits and character restrictions if keys are derived from external sources.
  • Low · No Explicit Memory Limit Enforcement — cache.go. As an in-memory cache, there's no explicit memory limit enforcement mentioned. An attacker could exhaust server memory by inserting large objects or numerous keys, leading to a denial of service. Fix: Implement and document memory usage limits. Consider adding eviction policies (LRU, LFU) beyond TTL-based expiration. Provide monitoring/metrics for memory usage. Document recommendations for appropriate cache sizing.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
