GRVYDEV/Project-Lightspeed
A self-contained OBS -> FTL -> WebRTC live streaming server. It comprises three parts; once configured, anyone can achieve sub-second OBS-to-browser livestreaming.
Stale — last commit 3y ago
Weakest axis: last commit was 3y ago; no tests detected
- ✓ 11 active contributors
- ✓ MIT licensed
- ✓ CI configured
- ⚠ Stale — last commit 3y ago
- ⚠ Concentrated ownership — top contributor handles 72% of recent commits
- ⚠ No test directory detected
What would change the summary?
- → Use as dependency: Mixed → Healthy if there is at least 1 commit in the last 365 days
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Forkable" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/grvydev/project-lightspeed)
Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/grvydev/project-lightspeed on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: GRVYDEV/Project-Lightspeed
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/GRVYDEV/Project-Lightspeed shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
WAIT — Stale — last commit 3y ago
- 11 active contributors
- MIT licensed
- CI configured
- ⚠ Stale — last commit 3y ago
- ⚠ Concentrated ownership — top contributor handles 72% of recent commits
- ⚠ No test directory detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live GRVYDEV/Project-Lightspeed
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/GRVYDEV/Project-Lightspeed.
What it runs against: a local clone of GRVYDEV/Project-Lightspeed — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in GRVYDEV/Project-Lightspeed | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 1160 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of GRVYDEV/Project-Lightspeed. If you don't
# have one yet, run these first:
#
# git clone https://github.com/GRVYDEV/Project-Lightspeed.git
# cd Project-Lightspeed
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of GRVYDEV/Project-Lightspeed and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "GRVYDEV/Project-Lightspeed(\.git)?\b" \
  && ok "origin remote is GRVYDEV/Project-Lightspeed" \
  || miss "origin remote is not GRVYDEV/Project-Lightspeed (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Critical files exist
test -f "ingest/src/main.rs" \
  && ok "ingest/src/main.rs" \
  || miss "missing critical file: ingest/src/main.rs"
test -f "webrtc/main.go" \
  && ok "webrtc/main.go" \
  || miss "missing critical file: webrtc/main.go"
test -f "frontend/src/App.js" \
  && ok "frontend/src/App.js" \
  || miss "missing critical file: frontend/src/App.js"
test -f "frontend/src/context/RTCPeerContext.jsx" \
  && ok "frontend/src/context/RTCPeerContext.jsx" \
  || miss "missing critical file: frontend/src/context/RTCPeerContext.jsx"
test -f "frontend/src/context/SocketContext.jsx" \
  && ok "frontend/src/context/SocketContext.jsx" \
  || miss "missing critical file: frontend/src/context/SocketContext.jsx"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 1160 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~1130d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/GRVYDEV/Project-Lightspeed"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
Project Lightspeed is a self-contained live streaming server that ingests RTMP/FTL streams from OBS, repackages them as WebRTC, and delivers them to browsers with sub-second latency. It combines a Rust-based FTL ingest service, a Go-based WebRTC relay, and a React frontend into a single deployable system that eliminates the need for external CDNs or streaming services for low-latency broadcasting.

It is a monorepo with three independent service components: the Rust ingest service (handles FTL protocol ingestion from OBS), the Go WebRTC relay service (manages peer connections), and the React frontend (frontend/ directory, with src/components/VideoPlayer.jsx as the core consumer). Services communicate via WebSocket (SocketContext.jsx) and WebRTC peer connections (RTCPeerContext.jsx). Deployment via docker-compose.yml orchestrates all three.
👥Who it's for
Live streamers and content creators who want ultra-low-latency (sub-second) streaming directly from OBS to viewers without relying on Twitch/YouTube infrastructure; DevOps engineers deploying self-hosted streaming solutions; and developers building custom streaming platforms who need a reference implementation of FTL→WebRTC transcoding.
🌱Maturity & risk
Development appears dormant: the last commit was roughly three years ago, although the project previously saw reasonable GitHub engagement (multiple CI/CD workflow pipelines, Docker support, a broad contributor graph). The codebase spans three separate language ecosystems (Rust, Go, JavaScript), which increases maintenance complexity. It is usable for self-hosted deployments, but deployment and tuning require understanding all three components.
High architectural complexity: three independent codebases (Rust ingest, Go WebRTC, React frontend) must coordinate perfectly; missing a single component breaks the entire pipeline. Dependencies are spread across three ecosystems with no lock files visible in the file list for the Rust/Go portions, making reproducible builds uncertain. Single-repo structure means a breaking change in one component can cascade across all three.
Active areas of work
CI/CD workflows exist for the ingest and webrtc services (.github/workflows/ingest.yml, webrtc.yml). The most recent activity focused on Docker containerization (Dockerfile in frontend/) and Ubuntu installation automation (contrib/ubuntu_installer/). A FUNDING.yml is configured for community sponsorship, though commit activity has since stalled.
🚀Get running
git clone https://github.com/GRVYDEV/Project-Lightspeed.git
cd Project-Lightspeed
docker-compose up
Or manually: navigate to frontend/ and run npm install && npm start for the React dev server on port 3000. The Rust ingest and Go WebRTC services must be started separately (see their respective directories for build instructions).
Daily commands:
Frontend dev: cd frontend && npm install && npm start (starts on http://localhost:3000). Production build: npm run build. Full stack: docker-compose up from repo root. Ingest and WebRTC services start via their respective Dockerfile entries.
🗺️Map of the codebase
- ingest/src/main.rs — Entry point for the FTL ingest server that receives OBS streams; all incoming broadcast data flows through this module.
- webrtc/main.go — Entry point for the WebRTC server that handles H.264 codec negotiation and peer connections; core streaming delivery mechanism.
- frontend/src/App.js — Root React component that orchestrates socket context and WebRTC peer connections for the viewer UI.
- frontend/src/context/RTCPeerContext.jsx — Manages WebRTC peer connection state and lifecycle; critical for establishing the video stream between server and browser.
- frontend/src/context/SocketContext.jsx — Manages the signaling socket connection to the WebRTC server; handles offer/answer exchange and ICE candidates.
- webrtc/internal/signal/h264.go — H.264 NAL unit parsing and codec configuration; essential for extracting SPS/PPS and maintaining video stream integrity.
- ingest/src/ftl_codec.rs — FTL protocol codec implementation; defines how OBS stream data is deserialized and routed internally.
🛠️How to make changes
Add a new Frontend UI Component
- Create a new .jsx file in frontend/src/components/ (frontend/src/components/YourComponent.jsx)
- Import and use RTCPeerContext or SocketContext for WebRTC/messaging state (frontend/src/context/RTCPeerContext.jsx)
- Add styled-components CSS in frontend/src/styles/yourComponentStyles.js
- Import and render the component in App.js (frontend/src/App.js); a minimal component sketch follows below
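A minimal sketch of such a component, written in TypeScript for clarity even though the repo uses plain .jsx, and assuming SocketContext.jsx exports a React context named SocketContext whose value includes a connected flag (both names are assumptions; check the real exports first):

```tsx
// Hypothetical frontend/src/components/StreamStatus.tsx (sketch, not repo code).
// Assumes a `SocketContext` export whose value carries a `connected` boolean.
import React, { useContext } from "react";
import { SocketContext } from "../context/SocketContext";

type SocketState = { connected: boolean };

export default function StreamStatus(): JSX.Element {
  // Read live signaling state from the shared context instead of prop-drilling it.
  const { connected } = useContext(SocketContext) as SocketState;
  return (
    <div role="status">
      {connected ? "Signaling connected. Stream should start shortly." : "Connecting to signaling server…"}
    </div>
  );
}
```

Render it from App.js (or any parent inside the context providers) so the context value is available.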
Add a new WebRTC Server Handler or Protocol Extension
- Add the new signaling message type in webrtc/internal/signal/signal.go
- Extend the WebSocket client message router in webrtc/ws/client.go
- Update webrtc/ws/hub.go if the new message type requires broadcasting or state management
- Recompile and redeploy the webrtc service via Docker or a direct Go build (webrtc/Dockerfile); the frontend-side counterpart is sketched below
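On the browser side, the new message type also has to be handled in SocketContext.jsx. The sketch below assumes signaling messages are JSON objects with event and data fields; that envelope and the message names are assumptions, not something confirmed from webrtc/ws/client.go:

```ts
// Sketch: frontend handling for a hypothetical new `viewer_count` signaling message.
// The { event, data } envelope and all message names are assumptions.
type SignalMessage =
  | { event: "candidate"; data: RTCIceCandidateInit }
  | { event: "answer"; data: RTCSessionDescriptionInit }
  | { event: "viewer_count"; data: { count: number } }; // the new message type

function handleSignal(pc: RTCPeerConnection, raw: MessageEvent<string>): void {
  const msg = JSON.parse(raw.data) as SignalMessage;
  switch (msg.event) {
    case "candidate":
      void pc.addIceCandidate(msg.data);      // remote ICE candidate from the relay
      break;
    case "answer":
      void pc.setRemoteDescription(msg.data); // completes SDP negotiation
      break;
    case "viewer_count":
      console.log(`viewers: ${msg.data.count}`); // route the new event into UI state here
      break;
  }
}
```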
Add Ingest-side Stream Processing or Transcoding
- Extend the FTL codec parsing in ingest/src/ftl_codec.rs to handle new frame types or metadata
- Modify ingest/src/connection.rs to process or filter decoded frames before the shared-memory write
- Update CLI arguments in ingest/src/cli.yml if new tuning options are needed
- Recompile and redeploy the ingest service via Docker or Cargo (ingest/Dockerfile)
Configure Multi-Stream or Load Balancing
- Update docker-compose.yml to scale the ingest and webrtc services with unique ports
- Modify frontend/public/config.json to include multiple WebRTC server endpoints
- Update SocketContext (frontend/src/context/SocketContext.jsx) to select a server endpoint based on availability or load; an endpoint-selection sketch follows below
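A frontend-side sketch of the last step, assuming config.json is extended with a webrtcEndpoints array (the field name and the current schema of frontend/public/config.json are assumptions to verify):

```ts
// Sketch: pick the first reachable signaling endpoint from a hypothetical
// `webrtcEndpoints` array in frontend/public/config.json.
interface LightspeedConfig {
  webrtcEndpoints: string[]; // e.g. ["wss://edge-1.example.com/websocket", "wss://edge-2.example.com/websocket"]
}

async function pickEndpoint(): Promise<string> {
  const cfg: LightspeedConfig = await fetch("/config.json").then((r) => r.json());
  for (const url of cfg.webrtcEndpoints) {
    // Probe each endpoint with a short-lived WebSocket; fall through to the next on failure.
    const reachable = await new Promise<boolean>((resolve) => {
      const ws = new WebSocket(url);
      const timer = setTimeout(() => { ws.close(); resolve(false); }, 2000);
      ws.onopen = () => { clearTimeout(timer); ws.close(); resolve(true); };
      ws.onerror = () => { clearTimeout(timer); resolve(false); };
    });
    if (reachable) return url;
  }
  throw new Error("No WebRTC signaling endpoint reachable");
}
```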
🔧Why these technologies
- Rust (Ingest) — Memory-safe, high-performance stream parsing with minimal latency overhead for real-time FTL protocol handling.
- Go (WebRTC) — Excellent concurrency primitives (goroutines) for handling many simultaneous WebRTC peer connections and broadcasts.
- React + Styled-Components (Frontend) — Declarative UI with context-based state management for complex WebRTC peer and signaling lifecycle.
- WebRTC + H.264 — Sub-second end-to-end latency via peer-to-peer video delivery and hardware-decoded codec support.
- Shared Memory (IPC) — Zero-copy frame passing between ingest and WebRTC server; critical for ultra-low latency.
- Docker Compose — Simplifies multi-service deployment (ingest, WebRTC, frontend) with isolated containers and networking.
⚖️Trade-offs already made
- Shared memory for ingest ↔ WebRTC communication
  - Why: Eliminates serialization and network overhead, achieving sub-millisecond frame propagation.
  - Consequence: Requires both services on the same host; tight coupling reduces horizontal scalability.
- H.264 codec over VP8/VP9
  - Why: Lower latency, better browser hardware decode support, and smaller bandwidth footprint.
  - Consequence: Licensing considerations in some jurisdictions; limited to the H.264 codec ecosystem.
- WebRTC instead of RTMP/HLS
  - Why: Achieves sub-second latency and peer-to-peer efficiency.
  - Consequence: Higher CPU cost per viewer on the server (encode per peer); no built-in multi-bitrate or ABR.
- Synchronous FTL parsing in Rust (blocking I/O)
  - Why: Simplicity and predictable frame-by-frame latency for a single stream.
  - Consequence: Cannot easily handle multiple parallel ingest streams without threading; the current design assumes one broadcaster.
🚫Non-goals (don't propose these)
- Multi-bitrate adaptive streaming (ABR/HLS-style)
- Recording and archival
- Authentication and access control
- Stream transcoding or format conversion beyond H.264
- Horizontal scaling across multiple machines (shared memory binds ingest and WebRTC to same host)
- Support for protocols other than FTL (RTMP, MPEG-TS)
- Built-in clustering or redundancy
- Browser-based broadcasting (WebRTC send from browser)
🪤Traps & gotchas
- Config-driven service discovery: frontend/public/config.json must point to the correct ingest/WebRTC service URLs or the entire pipeline fails silently; there is no validation in the React code (a validation sketch follows this list).
- Three independent build systems: Rust, Go, and Node.js dependencies are not synchronized; a WebRTC protocol version mismatch between Go and the browser WebRTC API will cause mysterious connection failures.
- Docker networking: services communicate via docker-compose service names (e.g. 'ingest', 'webrtc') in the compose file, but standalone deployments require manual network configuration.
- FTL protocol specifics: the OBS FTL implementation may vary by version; the ingest service version must match the OBS FTL client's expectations.
- No environment variable documentation: a .env file exists but its required keys are not listed in the README.
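A sketch of the missing config validation, assuming the config carries a WebSocket URL under a wsUrl key (the real key names in frontend/public/config.json were not verified):

```ts
// Sketch: fail loudly instead of silently when frontend/public/config.json is
// missing or malformed. The `wsUrl` field name is an assumption.
async function loadConfig(): Promise<{ wsUrl: string }> {
  const res = await fetch("/config.json");
  if (!res.ok) throw new Error(`config.json fetch failed: HTTP ${res.status}`);
  const cfg = (await res.json()) as Partial<{ wsUrl: string }>;
  if (!cfg.wsUrl || !/^wss?:\/\//.test(cfg.wsUrl)) {
    throw new Error("config.json: wsUrl missing or not a ws:// or wss:// URL");
  }
  return { wsUrl: cfg.wsUrl };
}
```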
🏗️Architecture
💡Concepts to learn
- FTL (Faster-Than-Light) Protocol — The ingest protocol that OBS uses to send video to the Rust service; understanding FTL handshake, frame ordering, and metadata is essential for diagnosing ingest failures
- WebRTC (STUN/TURN/ICE) — The transport mechanism for delivering video to browsers; ICE candidate gathering, STUN/TURN traversal, and SDP negotiation in RTCPeerContext.jsx depend on understanding these protocols
- SDP (Session Description Protocol) Negotiation — WebRTC peers exchange SDP offers/answers to negotiate codec, bitrate, and transport; the Go relay and React context must correctly generate and parse SDP to establish streams (a minimal browser-side sketch follows this list)
- Sub-second Latency via Frame Batching — Achieving <1s latency requires aggressive frame buffering and network priority settings distinct from traditional RTMP→HLS (10-30s latency); understanding where buffering decisions happen in the video pipeline is critical
- WebSocket for Signaling vs. RTP for Media — SocketContext.jsx uses WebSocket for control (SDP exchange, viewer state), while media flows over WebRTC RTP; mixing these transports requires careful state synchronization
- Docker Service Networking — docker-compose.yml orchestrates three services that must discover each other by hostname; debugging network failures requires understanding Docker DNS and bridge networking
- React Context API for Real-time State — RTCPeerContext and SocketContext manage mutable connection state across the React component tree without Redux; understanding when Context triggers re-renders is essential for performance
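To make the SDP and signaling concepts concrete, here is a minimal browser-side sketch of an offer/answer exchange over a signaling WebSocket. It assumes the browser creates the offer and the relay answers, and it assumes an { event, data } JSON envelope; both are assumptions, so confirm the actual flow in SocketContext.jsx and webrtc/internal/signal/ before relying on it:

```ts
// Sketch: browser-side SDP negotiation over a signaling WebSocket (viewer receives video only).
async function connectViewer(signalingUrl: string): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });
  const ws = new WebSocket(signalingUrl);

  // The viewer only consumes media, so negotiate a receive-only video transceiver.
  pc.addTransceiver("video", { direction: "recvonly" });

  // Trickle local ICE candidates to the relay as they are gathered.
  pc.onicecandidate = (ev) => {
    if (ev.candidate) ws.send(JSON.stringify({ event: "candidate", data: ev.candidate }));
  };

  // Apply the relay's answer and remote candidates as they arrive.
  ws.onmessage = async (ev) => {
    const msg = JSON.parse(ev.data as string);
    if (msg.event === "answer") await pc.setRemoteDescription(msg.data);
    if (msg.event === "candidate") await pc.addIceCandidate(msg.data);
  };

  await new Promise<void>((resolve) => { ws.onopen = () => resolve(); });
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  ws.send(JSON.stringify({ event: "offer", data: offer }));
  return pc;
}
```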
🔗Related repos
- glimesh/glimesh.tv — Open-source Twitch alternative also implementing RTMP ingest and low-latency delivery; similar architecture but with an Elixir/Phoenix backend instead of Rust/Go
- owncast/owncast — Self-hosted streaming server supporting RTMP input and HLS/WebRTC output; overlaps with Lightspeed's goals but uses Go throughout instead of a mixed-language stack
- pion/webrtc — The Go WebRTC library that the relay service likely depends on; a reference for understanding SDP negotiation and ICE candidate handling in the Go component
- obsproject/obs-studio — OBS is the client generating the FTL streams consumed by this project's Rust ingest; understanding FTL protocol support in OBS is critical for testing
- Gum-Joe/ttv-tools — Twitch streaming toolkit ecosystem; may have complementary tools for OBS automation and multi-streaming that pair with Lightspeed deployments
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add WebRTC peer connection unit tests for RTCPeerContext.jsx
RTCPeerContext.jsx manages critical WebRTC peer connections but has no corresponding test file. Given that this is a real-time streaming application where connection stability is paramount, unit tests for context initialization, peer connection lifecycle, and error handling would prevent regressions. This is especially important since the ingest and webrtc workflows exist but frontend tests are missing. A test sketch follows the checklist below.
- [ ] Create frontend/src/context/tests/RTCPeerContext.test.js
- [ ] Add tests for context initialization and WebRTC peer connection setup
- [ ] Add tests for connection state changes and cleanup/unmounting
- [ ] Add tests for error scenarios (connection failures, ICE failures)
- [ ] Update frontend/src/App.test.js to reference the new context tests
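A starting point for the first two checklist items, assuming RTCPeerContext.jsx exports a provider component (called RTCPeerProvider here, which is an assumed name) and that the provider opens an RTCPeerConnection on mount and closes it on unmount; jsdom has no WebRTC, so the constructor is stubbed:

```tsx
// Hypothetical frontend/src/context/tests/RTCPeerContext.test.tsx (sketch only).
// `RTCPeerProvider` and the mount/unmount behaviour are assumptions about the real module.
import React from "react";
import { render } from "@testing-library/react";
import { RTCPeerProvider } from "../RTCPeerContext";

const close = jest.fn();
beforeEach(() => {
  // Stub the WebRTC constructor that jsdom does not provide.
  (global as any).RTCPeerConnection = jest.fn().mockImplementation(() => ({
    close,
    addTransceiver: jest.fn(),
    addEventListener: jest.fn(),
    createOffer: jest.fn().mockResolvedValue({ type: "offer", sdp: "" }),
    setLocalDescription: jest.fn().mockResolvedValue(undefined),
  }));
});

test("creates a peer connection on mount and closes it on unmount", () => {
  const { unmount } = render(<RTCPeerProvider><div /></RTCPeerProvider>);
  expect((global as any).RTCPeerConnection).toHaveBeenCalled();
  unmount();
  expect(close).toHaveBeenCalled();
});
```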
Add GitHub Actions workflow for frontend React app linting and build verification
The repo has ingest.yml and webrtc.yml workflows but no frontend CI pipeline. The frontend uses ESLint and react-scripts build, but there's no automated check ensuring code quality or build success on PRs. This prevents catching frontend regressions before merge and is inconsistent with having CI for other components.
- [ ] Create .github/workflows/frontend.yml
- [ ] Add Node.js setup step and dependency caching (frontend/package-lock.json)
- [ ] Add ESLint check step using rules in frontend/.eslintrc.json
- [ ] Add build verification step (npm run build)
- [ ] Optionally add frontend/src/App.test.js test execution step
Add integration tests for VideoPlayer.jsx and Plyr configuration
VideoPlayer.jsx is a core component that initializes Plyr (^3.6.3 dependency) for WebRTC stream playback, but has no tests. Without tests, changes to video player initialization, stream switching, or error handling could silently break the viewing experience. Tests should cover HLS/WebRTC source handling and Plyr initialization.
- [ ] Create frontend/src/components/tests/VideoPlayer.test.js
- [ ] Add tests for Plyr initialization with WebRTC sources
- [ ] Add tests for prop changes (source updates, quality changes)
- [ ] Add tests for error state rendering and recovery
- [ ] Mock socket events from SocketContext.jsx to simulate stream state changes
🌿Good first issues
- Add unit tests for RTCPeerContext.jsx WebRTC state management: frontend/src/context/RTCPeerContext.jsx handles critical peer connection logic but has no visible test file; adding tests in frontend/src/tests/contexts/ would catch signaling bugs early
- Document required environment variables in README: .env file exists but its required keys (API endpoints, port numbers, TLS settings) are not documented; adding an .env.example file and ENV section to README would unblock deployments
- Add error handling and retry logic to SocketContext.jsx connection: frontend/src/context/SocketContext.jsx likely has no reconnect logic for dropped WebSocket connections, causing viewer freeze; implementing exponential backoff would improve stability (a backoff sketch follows below)
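A standalone sketch of the reconnect behaviour; how it would be threaded into SocketContext.jsx (state updates, message handlers) is intentionally left out:

```ts
// Sketch: reconnect a signaling WebSocket with capped exponential backoff.
function connectWithBackoff(url: string, onMessage: (ev: MessageEvent) => void): void {
  let attempt = 0;

  const open = (): void => {
    const ws = new WebSocket(url);
    ws.onopen = () => { attempt = 0; };                    // reset backoff after a good connect
    ws.onmessage = onMessage;
    ws.onclose = () => {
      const delay = Math.min(30_000, 1000 * 2 ** attempt); // 1s, 2s, 4s, ... capped at 30s
      attempt += 1;
      setTimeout(open, delay);
    };
  };

  open();
}
```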
⭐Top contributors
- @GRVYDEV — 67 commits
- @EnigmaCurry — 7 commits
- @nitrag — 4 commits
- @Crowdedlight — 4 commits
- @SpoopyTim — 3 commits
📝Recent commits
- 2ab01cc — Make the websocket URL more configurable (#78) (mensinda)
- 5434297 — Add ICE server configuration support (#77) (mensinda)
- 20b9b95 — Merge pull request #61 from SpoopyTim/fix/manual-install-docs (GRVYDEV)
- da302a2 — Merge pull request #63 from SpoopyTim/contrib/ubuntu_installer-overwritenotice (GRVYDEV)
- d9db287 — Update README.md (SpoopyTim)
- 1a9740d — Updated manual install instructions (SpoopyTim)
- b2a6552 — Merge pull request #59 from SpoopyTim/fix/compose-context (GRVYDEV)
- 2104dbd — Rename context for frontend (SpoopyTim)
- c6d63cb — Merge pull request #45 from modwizcode/patch-1 (GRVYDEV)
- d65e879 — Merge pull request #56 from GRVYDEV/gg-revival (GRVYDEV)
🔒Security observations
- High · Outdated React and Dependencies with Known Vulnerabilities — frontend/package.json. The project uses React 17.0.1 and react-scripts 4.0.1, which are significantly outdated (the current major version is 18+). Multiple known CVEs exist for these versions, including XSS vulnerabilities and dependency-chain exploits in lodash, serialize-javascript, and other transitive dependencies. Fix: update to React 18.x or 19.x and react-scripts to the latest stable version; run `npm audit` and address all flagged vulnerabilities; consider `npm audit fix` and review breaking changes.
- High · Incomplete docker-compose Command Configuration — docker-compose.yml, lightspeed-webrtc service. The docker-compose.yml has a truncated command for the lightspeed-webrtc service that appears incomplete ('--ports=20000-2010' is cut off). This could lead to misconfiguration, unintended port exposure, or service failure. Fix: complete the command configuration, verify all port ranges are properly defined, and ensure the full command is syntactically valid.
- High · Environment Variables File Exposure Risk — .env file and docker-compose.yml. The project uses a .env file for configuration, including WEBSOCKET_HOST, INGEST_PORT, and WEB_PORT. If .env is accidentally committed to version control or exposed in Docker images, sensitive configuration could be compromised. No evidence of .env in .gitignore is visible in the provided structure. Fix: ensure .env is in .gitignore; inject environment variables in deployment pipelines rather than using file-based secrets; consider Docker secrets or orchestration-platform (K8s) secret management.
- Medium · Missing Security Headers Configuration — frontend/Dockerfile, frontend/docker/entrypoint.sh. The frontend React application has no visible security header configuration (no nginx.conf, helmet middleware, or similar) to protect against XSS, CSRF, clickjacking, and other web-based attacks. Fix: implement security headers (Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, Strict-Transport-Security); if using nginx, configure the headers in the reverse proxy.
- Medium · Publicly Exposed Service Ports — docker-compose.yml. The docker-compose configuration exposes multiple ports (INGEST_PORT, WEB_PORT, and WebRTC ports 20000-20100) directly from the host. There is no evidence of firewall rules, rate limiting, or authentication on the ingest service. Fix: implement network segmentation; use firewall rules to restrict access to the ingest port (8084); consider placing services behind a reverse proxy with authentication/authorization; add rate limiting on WebSocket and ingest endpoints.
- Medium · No Input Validation Visible in Frontend Components — frontend/src/components/LiveChat.jsx. The frontend includes components like LiveChat.jsx that likely handle user input (chat messages), but no input validation, sanitization, or XSS protection patterns are evident in the provided file structure. Fix: implement client-side input validation and sanitization (e.g. DOMPurify) before rendering, plus server-side validation for all user inputs on the WebRTC and ingest services. A sanitization sketch follows below.
- Medium · WebSocket Lacks Visible Authentication — frontend/src/context/SocketContext.jsx, webrtc/ws/client.go, webrtc/internal/signal/signal.go. The project uses WebSocket communication (SocketContext.jsx) but no authentication mechanism is evident from the file structure, which could allow unauthorized access to streaming and chat functionality. Fix: implement WebSocket authentication using JWT tokens or similar; validate stream keys before allowing WebRTC connections; add authorization checks for chat and control messages.
- Medium · Missing CORS Configuration Visibility — webrtc/internal/signal/http.go, webrtc/main.go. No CORS configuration is visible in the provided files. The frontend and backend services run on different ports, making CORS misconfiguration a potential vulnerability (too permissive, allowing unauthorized origins). Fix: implement strict CORS policies that only allow requests from known frontend origins.
LLM-derived; treat as a starting point, not a security audit.
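For the input-validation finding above, a frontend-side sketch using DOMPurify; whether LiveChat.jsx renders raw HTML at all was not verified, and if it renders plain text through JSX, React already escapes it and this extra layer is unnecessary:

```tsx
// Sketch: sanitize a chat message with DOMPurify before injecting it as HTML.
import React from "react";
import DOMPurify from "dompurify";

export function ChatMessage({ body }: { body: string }): JSX.Element {
  // Strip everything except a small allowlist of inline formatting tags.
  const safe = DOMPurify.sanitize(body, { ALLOWED_TAGS: ["b", "i", "em", "strong", "a"] });
  return <li dangerouslySetInnerHTML={{ __html: safe }} />;
}
```

Sanitizing at render time protects viewers from malicious message content, but server-side validation on the chat and ingest services is still worth adding as defence in depth, as the finding above notes.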
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.