RepoPilot

eranyanay/1m-go-websockets

handling 1M websockets connections in Go

Overall: Mixed · Stale — last commit 4y ago

  • Use as dependency: Concerns (weakest axis) — copyleft license (AGPL-3.0) — review compatibility; last commit was 4y ago…
  • Fork & modify: Mixed — no tests detected; no CI workflows detected…
  • Learn from: Healthy — documented and popular; useful reference codebase to read through.
  • Deploy as-is: Mixed — last commit was 4y ago; no CI workflows detected

  • 3 active contributors
  • AGPL-3.0 licensed
  • Stale — last commit 4y ago
  • Small team — 3 contributors active in recent commits
  • Concentrated ownership — top contributor handles 67% of recent commits
  • AGPL-3.0 is copyleft — check downstream compatibility
  • No CI workflows detected
  • No test directory detected
What would change the summary?
  • Use as dependency: Concerns → Mixed if relicensed under MIT/Apache-2.0 (rare for established libs)
  • Fork & modify: Mixed → Healthy if a test suite is added
  • Deploy as-is: Mixed → Healthy if ≥1 commit lands in the last 180 days

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Great to learn from" badge

Paste into your README — live-updates from the latest cached analysis.

RepoPilot: Great to learn from
[![RepoPilot: Great to learn from](https://repopilot.app/api/badge/eranyanay/1m-go-websockets?axis=learn)](https://repopilot.app/r/eranyanay/1m-go-websockets)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/eranyanay/1m-go-websockets on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: eranyanay/1m-go-websockets

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/eranyanay/1m-go-websockets shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Stale — last commit 4y ago

  • 3 active contributors
  • AGPL-3.0 licensed
  • ⚠ Stale — last commit 4y ago
  • ⚠ Small team — 3 contributors active in recent commits
  • ⚠ Concentrated ownership — top contributor handles 67% of recent commits
  • ⚠ AGPL-3.0 is copyleft — check downstream compatibility
  • ⚠ No CI workflows detected
  • ⚠ No test directory detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live eranyanay/1m-go-websockets repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/eranyanay/1m-go-websockets.

What it runs against: a local clone of eranyanay/1m-go-websockets — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in eranyanay/1m-go-websockets | Confirms the artifact applies here, not a fork |
| 2 | License is still AGPL-3.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 1392 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>eranyanay/1m-go-websockets</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of eranyanay/1m-go-websockets. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/eranyanay/1m-go-websockets.git
#   cd 1m-go-websockets
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of eranyanay/1m-go-websockets and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "eranyanay/1m-go-websockets(\.git)?\b" \
  && ok "origin remote is eranyanay/1m-go-websockets" \
  || miss "origin remote is not eranyanay/1m-go-websockets (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. The AGPL-3.0 license text opens with
# "GNU AFFERO GENERAL PUBLIC LICENSE", so match that rather than the bare SPDX id.
(grep -qi "GNU AFFERO GENERAL PUBLIC LICENSE" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"AGPL-3\.0\"" package.json 2>/dev/null) \
  && ok "license is AGPL-3.0" \
  || miss "license drift — was AGPL-3.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in 4_optimize_gobwas/server.go 4_optimize_gobwas/epoll.go \
         3_optimize_ws_goroutines/server.go client.go README.md; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 1392 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1362d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/eranyanay/1m-go-websockets"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

A case study demonstrating how to handle 1 million concurrent WebSocket connections in Go with a pure-Go implementation and a minimal RAM footprint (~1 GB). It combines OS-level optimizations (epoll, ulimit tuning), efficient goroutine patterns, and the gobwas/ws library to show practical techniques for high-concurrency real-time systems. The repo is a linear progression of five examples of increasing complexity: 0_simple_web_server/ (baseline HTTP), 1_ws_example/ (basic WebSocket), 2_ws_ulimit/ (OS tuning), 3_optimize_ws_goroutines/ (goroutine pooling + epoll), 4_optimize_gobwas/ (gobwas library + epoll). Each folder is self-contained with its own server.go plus a README explaining the optimization; a shared client.go drives load testing across all examples.

👥Who it's for

Go backend engineers building high-scale real-time systems (chat servers, live dashboards, IoT platforms) who need to understand how to architect WebSocket servers that handle millions of concurrent connections without excessive memory or CPU overhead.

🌱Maturity & risk

Experimental reference implementation, not production-ready. Created for a 2019 GopherCon Israel talk, this is explicitly a case study and teaching tool rather than a maintained library. There is no test suite, no visible CI/CD pipeline, and little activity beyond the initial implementation and a few follow-up fixes—treat it as educational code, not a production dependency.

High risk for production use: single maintainer, no test coverage, explicitly non-production code per README, minimal dependencies (gobwas/ws, gorilla/websocket) but no version pinning beyond go.mod. Relies on Linux-specific syscalls (epoll in 3_optimize_ws_goroutines/epoll.go and 4_optimize_gobwas/epoll.go) making it non-portable. No issue tracker activity or PR process documented.

Active areas of work

No active development. This is a static reference implementation from 2019 accompanying a GopherCon talk. No visible PRs, issues, or recent commits in the provided metadata.

🚀Get running

git clone https://github.com/eranyanay/1m-go-websockets.git
cd 1m-go-websockets
go mod download
./setup.sh  # Docker-based setup for multi-instance testing
go run 4_optimize_gobwas/server.go  # Run the optimized server
go run client.go -conn=10000  # Connect 10k clients in separate terminal

Daily commands: Run individual examples in sequence: go run 0_simple_web_server/server.go, go run 1_ws_example/server.go, etc. For load testing, run server in one terminal, then go run client.go -conn=100000 in another. Docker multi-instance setup via ./setup.sh for production-scale testing. Profiling: ./pprof_goroutine.sh and ./pprof_heap.sh included.

🗺️Map of the codebase

  • 4_optimize_gobwas/server.go — Final optimized WebSocket server using gobwas/ws library with epoll multiplexing; demonstrates the production-ready implementation handling 1M connections efficiently.
  • 4_optimize_gobwas/epoll.go — Linux epoll-based I/O multiplexing implementation that reduces goroutine overhead by batching socket events; core to achieving 1M connection scalability.
  • 3_optimize_ws_goroutines/server.go — Intermediate optimization showing goroutine pooling and message buffering patterns that bridge naive WebSocket handling and full epoll optimization.
  • client.go — Concurrent WebSocket client used for load testing; essential for understanding the connection establishment and message flow patterns validated against.
  • README.md — Complete walkthrough of optimization journey from simple server to 1M connections; required context for understanding progression through each folder.
  • go.mod — Declares dependency on gobwas/ws (high-performance WebSocket library) over gorilla/websocket; critical architectural choice for performance.
  • setup.sh — Docker-based multi-instance deployment wrapper; shows how to run and validate the server at scale against OS limits.

🛠️How to make changes

Add a New Optimization Step

  1. Create a new numbered folder (e.g., 5_optimize_something/) following the pattern (5_optimize_something/server.go)
  2. Copy and refactor the previous step's server.go, introducing a single optimization focus (5_optimize_something/server.go)
  3. If using new I/O techniques, implement or extend epoll.go (5_optimize_something/epoll.go)
  4. Write a README explaining the optimization, bottleneck addressed, and performance gains vs. prior step (5_optimize_something/README.md)
  5. Test with client.go to validate connection limits and memory usage, updating setup.sh if needed (setup.sh)

Benchmark Against a New WebSocket Library

  1. Add new library to go.mod alongside gobwas/ws and gorilla/websocket (go.mod)
  2. Create a new variant server (e.g., 4_optimize_newlib/server.go) using the new library's API (4_optimize_newlib/server.go)
  3. Reuse or adapt epoll.go if the library supports custom event loops (4_optimize_newlib/epoll.go)
  4. Run client.go and pprof scripts to compare goroutine count, heap allocation, and max connections (client.go)

Deploy to Different Linux Kernel

  1. Review epoll platform assumptions in 3_optimize_ws_goroutines/epoll.go and 4_optimize_gobwas/epoll.go (4_optimize_gobwas/epoll.go)
  2. Ensure golang.org/x/sys provides correct syscall wrappers for target kernel (go.mod)
  3. Adjust setup.sh Docker base image or host kernel version if epoll behavior differs (setup.sh)
  4. Run pprof_goroutine.sh and pprof_heap.sh to verify memory and concurrency profiles match expectations (pprof_goroutine.sh)

🪤Traps & gotchas

  • Linux-only: epoll syscalls in 3_optimize_ws_goroutines/epoll.go and 4_optimize_gobwas/epoll.go will not compile on Windows/macOS — a Linux runtime is required.
  • OS tuning required: default ulimit and conntrack table sizes will fail at scale; 2_ws_ulimit/README.md and 4_optimize_gobwas/increase_conntrack_table document the required changes, but they must be applied manually before testing.
  • Docker setup via setup.sh may require additional kernel parameters (vm.max_map_count, etc.) on the host.
  • pprof profiling scripts assume port 6060 is available.

🏗️Architecture

💡Concepts to learn

  • epoll (Linux event multiplexing) — Core technique in 3_optimize_ws_goroutines/epoll.go and 4_optimize_gobwas/epoll.go that allows handling millions of connections without 1 goroutine per connection, reducing RAM by orders of magnitude.
  • File descriptor limits (ulimit) — Critical OS constraint preventing >1k concurrent connections by default; 2_ws_ulimit/ example exists solely to demonstrate and bypass this bottleneck.
  • Goroutine pooling / worker pattern — 3_optimize_ws_goroutines/server.go shows how to decouple goroutine count from connection count using a fixed worker pool, enabling 1M connections with manageable goroutine overhead.
  • Socket buffer tuning (SO_RCVBUF, SO_SNDBUF) — Default kernel socket buffers waste memory per connection; 2_ws_ulimit/README.md demonstrates tuning these for millions of idle connections with minimal footprint.
  • Connection tracking table (conntrack) — Linux netfilter conntrack limits concurrent connections at kernel level; 4_optimize_gobwas/increase_conntrack_table documents this often-missed constraint for 1M connection scenarios.
  • Zero-copy I/O with syscall.RawConn — 3_optimize_ws_goroutines/epoll.go and 4_optimize_gobwas/epoll.go avoid Go's overhead by using raw file descriptors with epoll, reducing syscall context switches.
  • gorilla/websocket — Pure Go WebSocket library used in early examples; reference for comparing gobwas/ws performance tradeoffs.
  • gobwas/ws — Lightweight WebSocket implementation used in final optimized example (4_optimize_gobwas/); key dependency for 1M connection goal.
  • valyala/fasthttp — High-performance HTTP server library for Go; demonstrates similar optimizations (memory pooling, efficient I/O) applicable to WebSocket scenarios.
  • cloudwego/netpoll — ByteDance's I/O multiplexing library for Go using epoll/kqueue; alternative to manual epoll syscall integration shown in this repo.
  • xtaci/kcptun — High-concurrency UDP tunneling in Go; demonstrates similar scaling patterns and epoll-based multiplexing for comparison.

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add benchmark comparison suite comparing gorilla/websocket vs gobwas/ws implementations

The repo demonstrates two different websocket libraries (gorilla/websocket in earlier examples, gobwas/ws in optimized versions) but lacks quantitative benchmarks comparing them. A comprehensive benchmark suite would help contributors and users understand the performance trade-offs between libraries across connection counts, memory usage, and throughput. This is critical since the entire repo's purpose is demonstrating high-scale websocket handling.

  • [ ] Create benchmark_test.go at repo root that implements Go testing benchmarks (BenchmarkGorillaWS, BenchmarkGobwasWS)
  • [ ] Add benchmarks that test: connection establishment rate, message throughput per 1K/10K/100K/1M connections, memory allocation patterns, goroutine count scaling
  • [ ] Document benchmark results in 4_optimize_gobwas/README.md showing performance improvements from optimization steps (steps 0-4)
  • [ ] Reference specific server.go files from each numbered directory (0_simple_web_server through 4_optimize_gobwas) in the benchmark suite

Add integration tests validating each optimization stage against the client.go

The repo includes client.go for testing but lacks integration tests that validate each optimization stage (0→1→2→3→4) actually works and handles the expected connection counts. This prevents regressions when contributors modify epoll.go or server implementations and makes it clear what success criteria each stage should meet.

  • [ ] Create integration_test.go that programmatically runs each server stage (using subdirectory server executables) against configurable connection counts (100, 1K, 10K)
  • [ ] Use client.go's connection logic to spawn N concurrent websocket clients and verify: successful connections, message echo functionality, graceful disconnection
  • [ ] Add test output validation showing memory usage and goroutine count metrics for each stage to demonstrate progression toward 1M connection target
  • [ ] Create test configuration that scales up (currently hardcoded in client.go) to validate each stage handles its target load before moving to next optimization

Extract common epoll implementation into shared internal package to reduce duplication

Both 3_optimize_ws_goroutines/epoll.go and 4_optimize_gobwas/epoll.go are nearly identical implementations of epoll wrapper logic. This duplication makes maintenance harder and obscures the actual differences between stages 3 and 4. Creating a shared internal package would clarify that stages 3-4 differ only in websocket library choice, not epoll usage.

  • [ ] Create internal/epoll/epoll.go containing the common epoll interface and Linux syscall wrappers (both files currently have duplicate EpollCreate, EpollCtl, EpollWait, and event handling logic)
  • [ ] Update 3_optimize_ws_goroutines/epoll.go to import and use internal/epoll package
  • [ ] Update 4_optimize_gobwas/epoll.go to import and use internal/epoll package
  • [ ] Update README.md in both directories to clarify that stage 3→4 progression is purely about library optimization (gorilla→gobwas), not epoll mechanism changes

🌿Good first issues

  • Add benchmark tests for each example (0_simple_web_server/ through 4_optimize_gobwas/) that measure connection count, memory usage, and latency—none exist currently despite being a performance showcase.
  • Document epoll() integration in 3_optimize_ws_goroutines/epoll.go and 4_optimize_gobwas/epoll.go with inline code comments explaining syscall parameters and event loop lifecycle—the mechanism is opaque to readers.
  • Create a comparison chart (Markdown table in README.md) showing memory/CPU/goroutines across all 5 examples with specific metrics (e.g., 'Example 2: 100k connections = 50MB RAM, 200 goroutines') to make optimization gains concrete.

Top contributors

  • Eran Yanay — 6 commits
  • eranyanay — 2 commits
  • Eran — 1 commit

📝Recent commits

  • 0649959 — Update README.md (eranyanay)
  • 6758771 — Added gomod, removed dead code (eranyanay)
  • 87d86c0 — Fix #9: memory leak (Eran)
  • e1f432f — Update README.md (Eran Yanay)
  • 85b5ed0 — Update LICENSE (Eran Yanay)
  • 602f4e7 — Update client.go (Eran Yanay)
  • 729869b — Update README.md (Eran Yanay)
  • 5ece3c2 — Update README.md (Eran Yanay)
  • 10d3d5d — Initial commit (Eran Yanay)

🔒Security observations

  • High · Outdated Go Version — go.mod. The project specifies 'go 1.12' which was released in February 2019 and is now significantly outdated. This version lacks critical security patches, performance improvements, and bug fixes available in modern Go versions (1.21+). Using outdated Go versions exposes the application to known security vulnerabilities. Fix: Update to the latest stable Go version (1.21 or newer). Modify go.mod to specify 'go 1.21' or later and rebuild all dependencies.
  • High · Vulnerable Dependency Versions — go.mod, go.sum. Multiple dependencies are pinned to outdated versions from 2018-2019: golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527 (2020), github.com/gobwas/ws v1.0.3 (2018), and github.com/gorilla/websocket v1.4.1 (2019). These versions likely contain known security vulnerabilities. golang.org/x/sys is particularly critical as it handles low-level system operations. Fix: Update all dependencies to their latest stable versions: 'go get -u' and review changelog for breaking changes. Pay special attention to golang.org/x/sys which should match your current Go version.
  • High · No Input Validation on WebSocket Messages — 1_ws_example/server.go, 3_optimize_ws_goroutines/server.go, 4_optimize_gobwas/server.go. Based on the file structure and purpose (handling 1M connections), the WebSocket server implementations likely accept and process incoming messages without apparent input validation mechanisms. This could expose the system to malicious payloads, buffer overflow attacks, or denial of service attacks. Fix: Implement comprehensive input validation for all WebSocket messages including: size limits, type validation, content sanitization, and rate limiting per connection.
  • Medium · Missing Origin/CORS Validation — 0_simple_web_server/server.go, 1_ws_example/server.go, 2_ws_ulimit/server.go, 3_optimize_ws_goroutines/server.go, 4_optimize_gobwas/server.go. WebSocket implementations in this codebase likely lack proper origin validation and CORS checks. This could allow cross-site WebSocket hijacking (CSWSH) attacks where malicious websites establish WebSocket connections to the server on behalf of users. Fix: Implement origin validation by checking the 'Origin' header in WebSocket upgrade requests. Only allow connections from whitelisted domains.
  • Medium · No TLS/SSL Implementation Visible — All server.go files. There is no indication of TLS/SSL encryption in the WebSocket implementations. WebSockets over unencrypted connections (ws:// instead of wss://) expose all message content to man-in-the-middle attacks and eavesdropping. Fix: Implement WSS (WebSocket Secure) using TLS certificates. Configure HTTPS and use wss:// protocol. Use self-signed certificates for development and valid certificates for production.
  • Medium · Potential Resource Exhaustion Vulnerability — 2_ws_ulimit/server.go, 3_optimize_ws_goroutines/server.go, 4_optimize_gobwas/server.go. While the codebase is designed to handle 1M connections efficiently, there's no visible implementation of connection limits, timeout mechanisms, or per-connection resource quotas. An attacker could potentially exhaust system resources through rapid connection establishment or large message flooding. Fix: Implement: (1) Maximum concurrent connections limit, (2) Per-connection bandwidth/message rate limiting, (3) Connection timeout policies, (4) Message size limits, (5) Memory limits per connection.
  • Medium · No Authentication/Authorization Mechanism — All server implementations. The WebSocket servers appear to lack authentication and authorization mechanisms. Any client can establish a connection and potentially access or modify data without verification of identity or permissions. Fix: Implement authentication (JWT tokens, session validation) before accepting WebSocket connections. Verify user identity during the HTTP upgrade phase. Implement role-based authorization for different message types.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
