RepoPilotOpen in app →

ossrs/srs

SRS is a simple, high-efficiency, real-time media server supporting RTMP, WebRTC, HLS, HTTP-FLV, HTTP-TS, SRT, MPEG-DASH, and GB28181, with codec support for H.264, H.265, AV1, VP9, AAC, Opus, and G.711.

Healthy

Healthy across the board

weakest axis
Use as dependencyHealthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modifyHealthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn fromHealthy

Documented and popular — useful reference codebase to read through.

Deploy as-isHealthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 6d ago
  • 10 active contributors
  • MIT licensed
Show all 6 evidence items →
  • CI configured
  • Tests present
  • Concentrated ownership — top contributor handles 53% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/ossrs/srs)](https://repopilot.app/r/ossrs/srs)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/ossrs/srs on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: ossrs/srs

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/ossrs/srs shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 6d ago
  • 10 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Concentrated ownership — top contributor handles 53% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live ossrs/srs repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/ossrs/srs.

What it runs against: a local clone of ossrs/srs — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters | |---|---|---| | 1 | You're in ossrs/srs | Confirms the artifact applies here, not a fork | | 2 | License is still MIT | Catches relicense before you depend on it | | 3 | Default branch develop exists | Catches branch renames | | 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code | | 5 | Last commit ≤ 36 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>ossrs/srs</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of ossrs/srs. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/ossrs/srs.git
#   cd srs
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of ossrs/srs and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "ossrs/srs(\\.git)?\\b" \\
  && ok "origin remote is ossrs/srs" \\
  || miss "origin remote is not ossrs/srs (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \\
   || grep -qiE "\"license\"\\s*:\\s*\"MIT\"" package.json 2>/dev/null) \\
  && ok "license is MIT" \\
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify develop >/dev/null 2>&1 \\
  && ok "default branch develop exists" \\
  || miss "default branch develop no longer exists"

# 4. Critical files exist
test -f "cmd/proxy/main.go" \\
  && ok "cmd/proxy/main.go" \\
  || miss "missing critical file: cmd/proxy/main.go"
test -f "internal/bootstrap/proxy.go" \\
  && ok "internal/bootstrap/proxy.go" \\
  || miss "missing critical file: internal/bootstrap/proxy.go"
test -f "internal/bootstrap/bootstrap.go" \\
  && ok "internal/bootstrap/bootstrap.go" \\
  || miss "missing critical file: internal/bootstrap/bootstrap.go"
test -f "go.mod" \\
  && ok "go.mod" \\
  || miss "missing critical file: go.mod"
test -f "Makefile" \\
  && ok "Makefile" \\
  || miss "missing critical file: Makefile"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 36 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~6d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/ossrs/srs"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

SRS is a high-performance, real-time media server written in C++ that ingests streams via RTMP, WebRTC, SRT, and GB28181, then delivers them over HLS, HTTP-FLV, HTTP-TS, and MPEG-DASH. It transcodes and relays H.264/H.265/AV1/VP9 video and AAC/Opus/G.711 audio, solving the need for a lightweight, low-latency streaming backbone that runs on Linux/macOS across X86_64, ARM, RISC-V, and LoongArch architectures. Monolithic C++ application: src/ contains core RTMP/WebRTC/HLS protocol handlers; trunk/src/ organizes protocol modules (rtmp/, http-flv/, webrtc/, srt/) alongside codec pipelines; build system is CMake-based with shell scripts (trunk/configure, trunk/build.sh) for platform-specific compilation; Go utilities in srsx/ (Redis client) provide auxiliary services; Docker and shell scripts in the root enable rapid deployment.

👥Who it's for

Streaming platform engineers and DevOps teams deploying live video infrastructure (RTMP ingest for broadcasters, HLS delivery for web/mobile, WebRTC for low-latency interaction), as well as embedded systems developers targeting edge devices. Contributors include maintainers of protocol implementations and real-time systems.

🌱Maturity & risk

Production-ready and actively maintained. The project is at v7.0 (Kai release), has substantial test coverage via CI/CD workflows (.github/workflows/test.yml, codeql-analysis.yml), receives regular commits, and is deployed widely in Docker environments. The large C++ codebase (~10.3M LOC) and established release pipeline indicate mature stability.

Low-to-moderate risk for production use, but with caveats: the project is heavily monolithic (~10.3M C++ lines in src/) making refactoring risky; dependency management relies on custom build system (build scripts in trunk/), and some third-party codec libraries (not all OSS) require careful licensing review. No obvious signs of abandonment, but single-language C++ dominance means Go/Python tooling is minimal and secondary.

Active areas of work

Active development on v7.0 (Kai) with focus on WebRTC interoperability, codec support expansion (AV1, VP9), and edge deployment. CI/CD workflows are green (test.yml, release.yml) and CodeQL analysis is enabled, suggesting ongoing security hardening. The codebase shows no stale branches or dormant issue tracker; changelogs and releases are published regularly.

🚀Get running

git clone https://github.com/ossrs/srs.git
cd srs
trunk/configure
make
sudo make install
# Or use Docker:
docker run --rm -it -p 1935:1935 -p 1985:1985 -p 8080:8080 -p 8000:8000/udp -p 10080:10080/udp ossrs/srs:6

Then verify at http://localhost:8080/ and stream via ffmpeg -re -i doc/source.flv -c copy -f flv rtmp://localhost/live/livestream.

Daily commands:

# Development build and run:
trunk/configure --with-debug
make
./objs/srs -c conf/srs.conf

# Docker (fastest for testing):
docker run -it -p 1935:1935 -p 8080:8080 ossrs/srs:6

# Check logs:
tail -f ./objs/srs.log

Default config is conf/srs.conf; listen ports are RTMP 1935, HTTP 8080, WebRTC 8000/UDP, SRT 10080/UDP.

🗺️Map of the codebase

  • cmd/proxy/main.go — Entry point for the SRS proxy component; orchestrates load balancing, cluster routing, and Redis integration for distributed streaming.
  • internal/bootstrap/proxy.go — Bootstrap logic for proxy initialization; sets up server listeners, routing tables, and cluster state management.
  • internal/bootstrap/bootstrap.go — Core bootstrap orchestration; initializes configuration, logging, and runtime environment for all SRS services.
  • go.mod — Dependency manifest defining Go version (1.25.0) and critical external dependencies like Redis client; essential for reproducible builds.
  • Makefile — Build system driver; defines compilation targets, test commands, and deployment workflows for the entire SRS ecosystem.
  • README.md — Project overview and architecture high-level guide; explains RTMP/WebRTC/HLS/SRT protocol support and use-case scope.
  • .openclaw/memory/srs-codebase-map.md — Internal architecture documentation; maps core modules, coroutine patterns, and streaming pipeline components.

🧩Components & responsibilities

  • Proxy Router (internal/bootstrap/proxy.go) (Go goroutines, Redis client, protocol handlers (RTMP/HLS/DASH/SRT)) — Dispatches incoming streams to origin servers based on Redis-cached routing rules; handles protocol translation and load balancing.
    • Failure mode: If Redis is down, routing table unavailable →

🛠️How to make changes

Add a new load-balancing strategy for proxy cluster

  1. Define strategy logic in a new file under internal/proxy/ (e.g., internal/proxy/strategy_round_robin.go) (internal/proxy/strategy_round_robin.go)
  2. Register strategy in the proxy bootstrap initialization (internal/bootstrap/proxy.go)
  3. Add integration tests using the test pattern in cmd/proxy/main_test.go with mocked origin servers (cmd/proxy/main_test.go)
  4. Document the strategy in the proxy design doc (docs/proxy/proxy-load-balancer.md)

Add support for a new streaming protocol (e.g., RTSP ingestion)

  1. Create protocol handler in internal/protocol/rtsp.go with codec detection and transcoding setup (internal/protocol/rtsp.go)
  2. Register protocol in bootstrap's protocol factory (internal/bootstrap/bootstrap.go)
  3. Wire protocol handler into proxy's request routing logic (internal/bootstrap/proxy.go)
  4. Add end-to-end test script in .openclaw/skills/srs-develop/scripts/ (e.g., proxy-e2e-rtsp-test.sh) (.openclaw/skills/srs-develop/scripts/proxy-e2e-rtsp-test.sh)
  5. Update README.md protocol support matrix (README.md)

Integrate a new Redis-based service (e.g., distributed session store)

  1. Create Redis client wrapper in internal/redis/session_store.go with context-aware commands (internal/redis/session_store.go)
  2. Add environment variable configuration support in internal/env/ (internal/env/redis.go)
  3. Wire client initialization in bootstrap and expose via dependency injection (internal/bootstrap/bootstrap.go)
  4. Add mocked Redis tests using counterfeiter in cmd/proxy/main_test.go (cmd/proxy/main_test.go)

🔧Why these technologies

  • Go 1.25.0 — High-performance concurrent I/O for multi-protocol streaming; lightweight goroutines handle thousands of simultaneous connections without thread overhead.
  • Redis (go-redis/v8) — Distributed state store for cluster-aware routing tables and origin health state; enables horizontal scaling across multiple proxy instances.
  • Docker — Containerized deployment model; isolates media server from host OS, simplifies cluster orchestration and updates.
  • Counterfeiter (mocking framework) — Interface-based test doubles; allows testing proxy logic in isolation from real origin servers and Redis without integration overhead.

⚖️Trade-offs already made

  • Single proxy binary (cmd/proxy/main.go) vs. microservices

    • Why: Streaming has ultra-low-latency requirements; separate services would add network hops and complexity.
    • Consequence: Easier deployment and operational simplicity, but less granular scaling (must scale entire proxy rather than individual functions).
  • Redis for distributed state vs. embedded consensus (raft/etcd)

    • Why: Redis is operationally simpler and lower-latency for read-heavy workloads (routing lookups).
    • Consequence: Requires external Redis cluster; no built-in fault tolerance (operator must manage Redis HA).
  • Protocol polymorphism (RTMP/HLS/DASH/SRT/WebRTC) in single server

    • Why: Many clients need multi-protocol delivery from a single origin (e.g., RTMP ingest → HLS/DASH playback).
    • Consequence: Higher complexity per feature addition; codec support matrix must be maintained across all protocols.

🚫Non-goals (don't propose these)

  • Not a database or persistence layer; SRS is stateless (state in Redis, media in transit only).
  • Not a transcoding farm; focuses on protocol translation and routing (transcoding delegated to origin servers or external services).
  • Not a live editing or DVR system; does not buffer or replay media (use origin server DVR plugins for that).
  • Not a DRM/encryption service; proxies streams as-is without content protection (rely on HTTPS/TLS at transport layer).
  • Not multi-cloud or Kubernetes-native by default; primarily single-machine or Redis-clustered architecture.

🪤Traps & gotchas

  • Coroutine context: SRS uses custom coroutine scheduling (see srs_core.hpp SrsCoroutine). Code assuming OS-level threading will break; async/await patterns must respect the event loop in srs_core_error.cpp.
  • Config file is required: ./objs/srs -c conf/srs.conf; no built-in defaults for ports/protocols. Missing or malformed conf/srs.conf will fail silently with cryptic error codes.
  • Memory limits on ARM: The build system (trunk/configure) detects architecture; cross-compilation requires explicit --cross-arch flags or Docker. No automatic fallback for low-memory environments.
  • Port conflicts: Default RTMP 1935, HTTP 8080, WebRTC 8000/UDP, SRT 10080/UDP. If any are in use, the server will hang during listener bind without clear error output; check objs/srs.log carefully.
  • SSL/TLS setup: Enabling HTTPS (HLS, HTTP-FLV over HTTPS) requires cert/key paths in conf/srs.conf; self-signed certs must have SAN for WebRTC DTLS. Missing certs cause silent fallback to plain HTTP.
  • FFmpeg / codec libraries: Some optional codecs (AV1, VP9) depend on external ffmpeg installations not vendored; build will succeed but runtime transcode will fail. Run ./objs/srs -v to verify codec support.
  • Redis state sharing (clustering): srsx/ requires Go 1.25+ and redis-server running locally on 127.0.0.1:6379 by default. Clustering config in conf/srs.conf does not auto-start redis; you must provision it separately.
  • Deprecated RTMP/HLS versions: SRS targets RTMP spec v3 and HLS v7; older clients expecting non-standard extensions will fail. Check downstream player compatibility.
  • No embedded web UI in v7.0: HTTP console at :8080 is basic; real-time stats require query of internal state or WebRTC stats API, not a full admin panel.

🏗️Architecture

💡Concepts to learn

  • RTMP State Machine — SRS's core protocol handler for video ingest. Understanding RTMP handshake (connect, createStream, publish) is essential to troubleshoot client connectivity and implement new RTMP extensions.
  • Coroutine-based Async I/O — SRS does not use OS threads for concurrency; instead, it multiplexes thousands of connections via user-space coroutines (see srs_core.hpp SrsCoroutine). This is critical for understanding control flow and debugging deadlocks.
  • HLS (HTTP Live Streaming) Segmentation — SRS converts live RTMP streams into HLS playlists and TS segments; understanding segment duration, bitrate ladder, and playlist refresh is essential for tuning latency vs. reliability tradeoffs.
  • WebRTC DTLS-SRTP — SRS's low-latency path uses DTLS for key exchange and SRTP for encrypted RTP media. Debugging WebRTC failures requires understanding certificate validation, ICE candidates, and SRTP cipher negotiation.
  • nginx/nginx-rtmp-module — Predecessor RTMP streaming module for Nginx; SRS diverged to create a standalone, multi-protocol server with better WebRTC/HLS support
  • arut/nginx-rtmp-module — Another RTMP-focused extension; shows the RTMP protocol baseline that SRS implements and extends
  • monktan/monkeyking — Lightweight Python RTMP server; demonstrates alternative implementation of RTMP state machine for reference
  • Haivision/srt — SRT (Secure Reliable Transport) protocol reference implementation; SRS depends on libsrt for SRT ingest support
  • pion/webrtc — Go WebRTC library; relates to SRS's WebRTC stack and the auxiliary services in srsx/ that may leverage Pion for clustering

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for SRS coroutine scheduling layer

The repo has detailed coroutine documentation in .openclaw/memory/srs-coroutines.md but lacks visible unit test coverage for the coroutine scheduler. SRS is a real-time media server where scheduling correctness is critical for stream processing. Adding tests would prevent regressions in timing-sensitive code paths and help new contributors understand the coroutine model.

  • [ ] Review existing coroutine implementation in src/core/srs_core_coroutine.cpp and related headers
  • [ ] Create new test file tests/unit/coroutine_test.cpp covering context switching, yield/resume cycles, and scheduler fairness
  • [ ] Add test cases for edge cases: nested coroutines, exception handling in coroutines, and stack overflow scenarios
  • [ ] Integrate tests into .github/workflows/test.yml CI pipeline
  • [ ] Document coroutine testing patterns in .openclaw/skills/srs-develop/SKILL.md

Add Go integration tests for Redis cluster failover in srsx module

The go.mod declares github.com/go-redis/redis/v8 dependency and .openclaw/skills/srs-develop/scripts includes proxy-e2e-redis-test.sh, indicating Redis integration is critical. However, there's no visible Go test coverage for Redis connection pooling, failover scenarios, or cluster mode. This is high-value for production reliability.

  • [ ] Create tests/srsx/redis_integration_test.go for Redis client initialization and connection validation
  • [ ] Add test cases for: single node failure, cluster rebalancing, command timeout, and connection pool exhaustion
  • [ ] Use docker-compose or testcontainers pattern for ephemeral Redis instances in tests
  • [ ] Add test target to Makefile or CMakeLists.txt that runs only when REDIS_TEST=1
  • [ ] Document Redis testing setup in .openclaw/skills/srs-develop/SKILL.md

Add WebRTC codec negotiation tests for H.265/AV1 support verification

The repo description highlights H.265, AV1, VP9 codec support, but codec negotiation in WebRTC is complex (SDP parsing, RTCP feedback, bitrate adaptation). The .github/workflows/test.yml workflow exists but likely doesn't have dedicated codec-specific test scenarios. Adding tests ensures new codec support doesn't break existing streams.

  • [ ] Audit src/rtc/srs_rtc_codec.cpp and SDP handling code for current codec support
  • [ ] Create tests/unit/webrtc_codec_negotiation_test.cpp with test cases for: H.264↔H.265 muxing, AV1 profile validation, and fallback codec selection
  • [ ] Add integration test in tests/e2e/ that publishes WebRTC streams with each codec and verifies decoder compatibility
  • [ ] Create GitHub Actions workflow .github/workflows/codec-compat-matrix.yml testing codec combinations across platforms
  • [ ] Document supported codec combinations in docs/CODEC_SUPPORT.md

🌿Good first issues

  • Add unit test coverage for srs_app_config.cpp: The configuration parsing module (src/core/srs_core_config.cpp) has minimal test cases in trunk/src/utest/. Writing tests for edge cases (malformed conf, missing required fields, type coercion) would improve reliability and serve as learning material for the codebase.
  • Document WebRTC DTLS handshake flow in comments: src/app/srs_app_webrtc.cpp performs DTLS negotiation but lacks detailed inline comments explaining state transitions and error handling. Adding clarifying comments (no code change) would help future contributors understand the protocol implementation.
  • Extend integration test suite in .github/workflows/test.yml: The CI currently runs unit tests and CodeQL; adding a stage that spins up the server, sends RTMP/HLS/WebRTC test streams, and verifies output (e.g., with ffprobe) would catch regressions early. Start by wrapping existing trunk/src/utest/ tests in a Docker container stage.
  • Create a contrib/examples/ directory with annotated config files: Users often struggle with conf/srs.conf tuning for specific use cases (low-latency WebRTC, high-throughput HLS, transcoding). Contributing example configs with comments (rtmp2hls.conf, webrtc-lowlatency.conf) and matching shell scripts to test them would reduce friction.
  • Add platform-specific build troubleshooting guide: The trunk/configure script detects architecture but error messages are terse. A docs/BUILD_TROUBLESHOOTING.md file listing common failures (OpenSSL not found, ARM cross-compile flags, libc versions) with solutions would save junior contributors hours.

Top contributors

Click to expand

📝Recent commits

Click to expand
  • 3663a8e — Proxy: Refactor server APIs and expand RTMP test coverage. v7.0.147 (#4672) (winlinvip)
  • d869643 — Proxy: Refine logger and environment APIs. v7.0.146 (#4670) (winlinvip)
  • 30fc777 — Proxy: Modernize internal packages on stdlib and add unit tests. v7.0.145 (#4667) (winlinvip)
  • cd11a67 — Proxy: Harden internal/env tests and add counterfeiter fakes. v7.0.144 (#4665) (winlinvip)
  • 460412c — Move build output to bin/, replace godotenv with custom .env parser, and update docs. v7.0.143 (#4661) (winlinvip)
  • 7c17c93 — Refactor bootstrap for multi-server support and add srs-develop skill. v7.0.142 (#4657) (winlinvip)
  • 5f47cee — OpenClaw: Improve SRS support skill with docs integration, troubleshooting, and workspace updates (#4655) (winlinvip)
  • 8a53cb5 — OpenClaw: Restructure workspace with symlinks, add codebase map, and rewrite AI docs. v7.0.141 (#4654) (winlinvip)
  • ada9396 — OpenClaw: Rename openclaw/ to .openclaw/ and update all symlinks and paths (winlinvip)
  • c741943 — Rename openclaw workspace name. (winlinvip)

🔒Security observations

  • High · Outdated Redis Client Dependency — go.mod - github.com/go-redis/redis/v8 v8.11.5. The go-redis/redis dependency v8.11.5 is significantly outdated. Current stable versions are v9.x. This older version may contain known security vulnerabilities and lacks security patches available in newer releases. Fix: Update to the latest stable version of go-redis (v9.x or later). Review the changelog for security-related fixes and breaking changes before upgrading.
  • High · Permissive Docker Build Dependencies — Dockerfile - RUN apt-get install line. The Dockerfile installs build tools (gcc, make, g++, patch, unzip, perl, git) without version pinning or integrity verification. This increases the attack surface and makes builds non-deterministic and potentially vulnerable to supply chain attacks. Fix: Pin specific versions of all packages (e.g., gcc=10-, make=4.3-). Use SHA256 checksums to verify package integrity. Consider using a minimal base image and multi-stage builds to reduce final image size and attack surface.
  • High · Exposed Build Arguments in Docker — Dockerfile - RUN echo statement with ARG variables. The Dockerfile uses ARG for CONFARGS, MAKEARGS, and other build parameters that are echoed in RUN statements. These values could leak sensitive configuration information in build logs and Docker image history. Fix: Avoid echoing sensitive build arguments. Use BuildKit secrets for sensitive build-time configuration. Mark non-sensitive ARGs as --mount=type=cache instead of printing them.
  • Medium · Missing DEBIAN_FRONTEND Persistence — Dockerfile - ENV DEBIAN_FRONTEND declaration. While DEBIAN_FRONTEND=noninteractive is set, there's no verification that subsequent package installations respect this setting. Interactive prompts could cause build failures or security issues. Fix: Add apt-get configuration flags: -y and use apt-get install -y consistently. Consider adding RUN echo 'APT::Acquire::Retries "3";' >> /etc/apt/apt.conf.d/80-retries for resilience.
  • Medium · Missing Security Headers in Configuration — Configuration files (not visible in partial structure). No evidence of security headers configuration (CSP, X-Frame-Options, X-Content-Type-Options) in the visible codebase. For a media server handling web requests, this could expose users to XSS or clickjacking attacks. Fix: Implement comprehensive security headers for all HTTP responses. Document and test security headers in CI/CD pipelines.
  • Medium · Unverified Shell Commands in Dockerfile — Dockerfile - SHELL declaration and RUN if statements. The Dockerfile uses /bin/bash -c without set -e, causing potential silent failures in chained commands. Additionally, conditional logic using [[ could fail on Alpine-based images. Fix: Add set -e to ensure failures stop the build. Use POSIX-compliant syntax or ensure bash is available. Test Dockerfile with multiple base images.
  • Low · Potential Hardcoded Configuration Paths — Root-level configuration directories (.claude/*, .codex/*, .kiro/*, .openclaw/*). The codebase includes multiple configuration and memory directories (.claude, .codex, .kiro, .openclaw) that may contain development-time secrets or sensitive configuration data. Fix: Ensure all configuration directories are excluded from production builds (.dockerignore, .gitignore). Verify no credentials are stored in MEMORY.md or configuration files. Use environment variables for sensitive configuration.
  • Low · Missing .dockerignore File Specification — Dockerfile - COPY . /srs instruction. No explicit .dockerignore file content visible. The Dockerfile copies all context (COPY . /srs) without filtering, which could include development tools, test files, and unnecessary artifacts. Fix: Create a comprehensive .dockerignore file excluding: .git, .github, .claude*, .codex*, .kiro*, .openclaw*, node_modules, test files, documentation, and build artifacts.
  • Low · Transitive Dependency Management — undefined. undefined Fix: undefined

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Healthy signals · ossrs/srs — RepoPilot