Arthur-Ficial/apfel
The free AI already on your Mac. CLI tool, OpenAI-compatible server, and interactive chat — all on-device via Apple Intelligence. No API keys, no cloud, no downloads.
Healthy across all four use cases
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓Last commit 4d ago
- ✓4 active contributors
- ✓MIT licensed
Show 4 more →Show less
- ✓CI configured
- ✓Tests present
- ⚠Small team — 4 contributors active in recent commits
- ⚠Single-maintainer risk — top contributor 97% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/arthur-ficial/apfel)Paste at the top of your README.md — renders inline like a shields.io badge.
▸Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/arthur-ficial/apfel on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: Arthur-Ficial/apfel
Generated by RepoPilot · 2026-05-10 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in Verify before trusting
below. If any check returns
FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding. - Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/Arthur-Ficial/apfel shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 4d ago
- 4 active contributors
- MIT licensed
- CI configured
- Tests present
- ⚠ Small team — 4 contributors active in recent commits
- ⚠ Single-maintainer risk — top contributor 97% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live Arthur-Ficial/apfel
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/Arthur-Ficial/apfel.
What it runs against: a local clone of Arthur-Ficial/apfel — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in Arthur-Ficial/apfel | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 34 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of Arthur-Ficial/apfel. If you don't
# have one yet, run these first:
#
# git clone https://github.com/Arthur-Ficial/apfel.git
# cd apfel
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of Arthur-Ficial/apfel and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "Arthur-Ficial/apfel(\\.git)?\\b" \\
&& ok "origin remote is Arthur-Ficial/apfel" \\
|| miss "origin remote is not Arthur-Ficial/apfel (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \\
|| grep -qiE "\"license\"\\s*:\\s*\"MIT\"" package.json 2>/dev/null) \\
&& ok "license is MIT" \\
|| miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \\
&& ok "default branch main exists" \\
|| miss "default branch main no longer exists"
# 4. Critical files exist
test -f "Sources/main.swift" \\
&& ok "Sources/main.swift" \\
|| miss "missing critical file: Sources/main.swift"
test -f "Sources/Server.swift" \\
&& ok "Sources/Server.swift" \\
|| miss "missing critical file: Sources/Server.swift"
test -f "Sources/Core/ApfelCore.docc/ApfelCore.md" \\
&& ok "Sources/Core/ApfelCore.docc/ApfelCore.md" \\
|| miss "missing critical file: Sources/Core/ApfelCore.docc/ApfelCore.md"
test -f "Sources/Session.swift" \\
&& ok "Sources/Session.swift" \\
|| miss "missing critical file: Sources/Session.swift"
test -f "Sources/Core/MCPProtocol.swift" \\
&& ok "Sources/Core/MCPProtocol.swift" \\
|| miss "missing critical file: Sources/Core/MCPProtocol.swift"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 34 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~4d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/Arthur-Ficial/apfel"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
apfel is a Swift CLI tool and OpenAI-compatible HTTP server that exposes Apple's built-in FoundationModels LLM (via Apple Intelligence) for on-device inference on Apple Silicon Macs. It provides a UNIX-friendly interface with streaming output, tool calling, file attachments, and JSON formatting—all without API keys, cloud dependence, or external downloads. The core innovation is turning macOS 26 Tahoe's native ML capabilities into a portable, standards-compliant AI backend. Monolithic Swift package (Sources/) with CLI entry (CLI.swift, CLI/ folder for subcommands and helpers), core logic in ContextManager.swift and Core/, server mode in separate modules, plus Python tooling in Sources/. Examples/ directory has runnable demo code (ContextStrategies, ErrorHandling, ToolCalling, MCPProtocol). Build system via Makefile; documentation in CLAUDE.md and docs/ subdirectories. .claude/routines/ shows Claude integration patterns.
👥Who it's for
macOS developers and power users with Apple Silicon Macs who want local AI inference for CLI scripting, prompt engineering, or integrating OpenAI-compatible workflows without cloud costs or latency. Also attracts developers migrating from Ollama/LM Studio who prefer native integration over external services. Contributors include Swift generalists interested in Apple's ML frameworks and UNIX tool design.
🌱Maturity & risk
Actively maintained and production-ready (v1.3.3 released, CI/CD via GitHub Actions, proper semantic versioning). Swift 6.3+ requirement and macOS 26 Tahoe dependency show it's designed for current Apple ecosystems. Has structured examples, documented CLI, and a defined release process. Not a weekend project, but tied to a specific OS version's capabilities.
Single maintainer (Arthur-Ficial org structure) creates sustainability risk. Tight coupling to Apple FoundationModels framework means no inference if you upgrade past supported macOS or switch hardware (Intel Macs unsupported). Pre-1.0 potential for breaking changes across OS updates. Minimal third-party dependencies reduces supply-chain risk but limits extensibility. Small GitHub stars suggest limited production adoption yet.
Active areas of work
Version 1.3.3 stable release with active CI via .github/workflows/ci.yml and publish-release.yml. Changelog and version tracking (.version file) indicate regular updates. .claude/routines/ (golden-goal, issue-triage, bug-solver, dist-channel-watch) show AI-assisted workflow setup. Examples directory suggests ongoing work on tool calling and context strategies.
🚀Get running
git clone https://github.com/Arthur-Ficial/apfel.git && cd apfel
make install
# Or: brew install apfel
Requires macOS 26 Tahoe+ with Apple Silicon, Command Line Tools (no Xcode), and Swift 6.3+. See docs/install.md for Nix/Mint/mise alternatives.
Daily commands:
# CLI mode (one-shot)
apfel "What is 2+2?"
# Stream output
apfel --stream "Write a haiku"
# Interactive chat
apfel --chat
# Server mode (OpenAI-compatible)
apfel --serve
# Listens at http://localhost:11434/v1
# Pipe input
echo "Translate to German: hello" | apfel
# File attachments
apfel -f file.txt "Summarize this"
🗺️Map of the codebase
Sources/main.swift— Entry point for the CLI and server; orchestrates argument parsing, mode selection, and subprocess management — essential to understand how requests flow through the system.Sources/Server.swift— HTTP server implementation exposing the OpenAI-compatible API; handles routing, request validation, and response streaming — core to the public interface.Sources/Core/ApfelCore.docc/ApfelCore.md— High-level documentation of the framework's public API and architecture; required reading for contributors integrating new features.Sources/Session.swift— Manages chat session state, context windows, and model interaction — critical for understanding stateful request handling.Sources/Core/MCPProtocol.swift— Implements the Model Context Protocol for tool/extension discovery; defines how external capabilities are integrated.Sources/CLI.swift— CLI mode implementation including argument validation, output formatting, and single-request execution logic.Sources/Core/ApfelError.swift— Unified error type and handling strategy; every error path traces through this, making it essential for debugging and extension.
🛠️How to make changes
Add a new OpenAI-compatible endpoint
- Define the request/response types in Sources/Models.swift following OpenAI schema conventions (
Sources/Models.swift) - Create a handler function in Sources/Handlers.swift that validates input and calls the appropriate session method (
Sources/Handlers.swift) - Register the route in Sources/Server.swift by adding a case in the request router (
Sources/Server.swift) - Add unit tests in Tests/apfelTests/ to verify request validation and response format (
Tests/apfelTests/CLIServerParityTests.swift)
Add a new tool/capability via MCP
- Define the tool schema as a JSON object; include name, description, and parameters (
Sources/Core/SchemaParser.swift) - Implement a tool handler in Sources/Core/ToolCallHandler.swift that receives parsed arguments and returns results (
Sources/Core/ToolCallHandler.swift) - Register the tool in the MCP protocol discovery by updating Sources/Core/MCPProtocol.swift (
Sources/Core/MCPProtocol.swift) - Add tests in Tests/apfelTests/ to verify schema parsing and tool invocation (
Examples/ToolCalling/main.swift)
Modify request validation or context strategy
- Update Sources/Core/ChatRequestValidator.swift to add new validation rules or constraints (
Sources/Core/ChatRequestValidator.swift) - If changing context window behavior, modify Sources/ContextManager.swift and Sources/Core/ContextStrategy.swift (
Sources/ContextManager.swift) - Update Sources/Core/Chat/BodyLimits.swift if token/byte limits change (
Sources/Core/Chat/BodyLimits.swift) - Add regression tests in Tests/apfelTests/ContextStrategyTests.swift (
Tests/apfelTests/ContextStrategyTests.swift)
Add a new CLI flag or mode
- Add the flag definition to the CLIArguments struct in Sources/CLI/CLIArguments.swift (
Sources/CLI/CLIArguments.swift) - Implement the business logic in Sources/CLI.swift or Sources/main.swift (
Sources/CLI.swift) - If the flag affects error handling, update Sources/CLI/CLIErrors.swift (
Sources/CLI/CLIErrors.swift) - Add comprehensive tests in Tests/apfelTests/CLIArgumentsTests.swift (
Tests/apfelTests/CLIArgumentsTests.swift)
🔧Why these technologies
- Swift 6.3+ with async/await — Native macOS concurrency model; seamless integration with Apple FoundationModels APIs; compile-time safety for concurrent streams.
- Apple FoundationModels framework — Provides on-device LLM inference on Apple Silicon; no external API calls, network dependencies, or licensing; hardware-accelerated via Neural Engine.
- OpenAI API compatibility — Allows drop-in replacement of cloud LLMs with local inference; maximizes client ecosystem reuse (SDKs, tools, integrations).
- Model Context Protocol (MCP) — Standardized tool/capability extensibility; enables external services and functions without modifying core code.
- Server-sent events (SSE) for streaming — Standard HTTP streaming for chunked LLM responses; low overhead, browser-compatible, aligns with OpenAI API convention.
⚖️Trade-offs already made
-
macOS 26+ Tahoe only (Apple Silicon)
- Why: Exclusive reliance on Apple FoundationModels; no need to maintain cross-platform backends.
- Consequence: Complete unavailability on Linux, Windows, Intel Macs, or older macOS versions; simplifies distribution and testing.
-
Single-process CLI mode vs. daemonized server
- Why: Simpler deployment for one-off requests; avoids background process management complexity.
- Consequence: Each CLI invocation starts a fresh process; no persistent in-memory cache across calls; network server mode trades startup latency for reusability.
-
Context management via context_strategy parameter
- Why: Flexible context windowing (sliding window, summarization) without hardcoded limits.
- Consequence: Adds complexity to token tracking; requires users to understand strategy trade-offs; easy to exceed available context if misconfigured.
-
No authentication/API keys in local-only mode
- Why: Removes attack surface and operational overhead for single-user
- Consequence: undefined
🪤Traps & gotchas
macOS version lock: Only works on macOS 26 Tahoe+; no backward compatibility. If you're on Sonoma or Ventura, this won't run. Apple Intelligence requirement: Must be enabled in System Settings; silent failure if disabled. Token context: Hard 4096-token limit (see README); long prompts will be truncated without warning. Readline mode: CReadline shim requires system readline/libedit; may behave differently across macOS versions. No offline fallback: Entirely dependent on FoundationModels availability—no graceful degradation if Apple's ML stack is unavailable. Model selection: No ability to choose between different LLMs; uses whatever Apple's default is.
🏗️Architecture
💡Concepts to learn
- Apple FoundationModels Framework — The core dependency that exposes on-device LLM inference; understanding its API, constraints (4096 tokens), and availability (macOS 26 Tahoe only) is critical to hacking on apfel
- OpenAI API Compatibility Layer — apfel sells itself as a drop-in replacement for OpenAI's API; understanding how it translates requests/responses between native Apple calls and OpenAI semantics is key to server mode and integration tests
- Token Context Window Management — The 4096-token limit and ContextManager's role in tracking session history directly impacts prompt design and error handling; exceeding it silently truncates input
- Tool Calling / Function Calling — Featured prominently (Examples/ToolCalling/) as a capability; allows the LLM to request external actions; understanding the schema and execution loop is needed for advanced integrations
- Model Context Protocol (MCP) — Examples/MCPProtocol/main.swift shows integration; MCP is an emerging standard for tool/agent communication; relevant if extending apfel with agent capabilities
- Streaming (Server-Sent Events) — README flags
--streamand--servemodes; understanding chunked HTTP responses and async I/O is needed for implementing or debugging streaming features - UNIX Tool Philosophy (stdin/stdout/pipes) — Core design principle (echo piping, -f flags for files); understanding composability and exit codes (ExitCodes.swift) is essential for CLI reliability and shell integration
🔗Related repos
ollama/ollama— Direct competitor providing local LLM inference via downloaded models; shares same use-case but runs cross-platform and supports model downloadslm-sys/FastChat— OpenAI-compatible local server framework; apfel implements similar API spec but uses Apple Silicon native inference instead of VLLMjmorganca/ollama— Earlier local LLM server; apfel's design (--serve with http://localhost:11434/v1) mirrors Ollama's API port and endpoint structureapple/swift-foundation— Companion repo for understanding Swift 6.3+ Foundation updates that apfel depends onapple/swift-evolution— Track accepted Swift Evolution Proposals affecting language features used in this codebase
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive unit tests for Core/Chat module components
The Sources/Core/Chat directory contains critical components (BodyLimits.swift, FinishReasonResolver.swift, StreamOutcome.swift, ToolResolution.swift) that handle chat request/response logic, but there are no visible test files in the repo. These modules directly impact OpenAI-compatible API behavior and deserve dedicated test coverage for edge cases, malformed inputs, and stream handling.
- [ ] Create Tests/Core/Chat/BodyLimitsTests.swift with tests for request size validation boundaries
- [ ] Create Tests/Core/Chat/FinishReasonResolverTests.swift with tests for various completion scenarios
- [ ] Create Tests/Core/Chat/ToolResolutionTests.swift covering tool calling edge cases and missing/malformed tool definitions
- [ ] Create Tests/Core/Chat/StreamOutcomeTests.swift for stream state transitions and cleanup verification
- [ ] Add test target to Package.swift and verify CI picks up new tests
Add GitHub Actions workflow for Swift documentation generation and validation
The repo has extensive DocC documentation (Sources/Core/ApfelCore.docc/ with .md files for ContextStrategies, ErrorHandling, GettingStarted, etc.), but there's no CI workflow validating that these docs build correctly or that code examples compile. This is critical for a library used by other developers.
- [ ] Create .github/workflows/docs-build.yml workflow that runs
swift package generate-documentation - [ ] Add validation step to ensure all DocC links are valid (broken links checker)
- [ ] Add step to compile all Examples/* (ContextStrategies, ErrorHandling, MCPProtocol, ToolCalling) as part of docs CI
- [ ] Post documentation build artifacts or warnings on PRs to catch doc regressions early
Add integration tests for CLI argument parsing and error handling
Sources/CLI/ contains CLIArguments.swift, CLIErrors.swift, and ExitCodes.swift, indicating sophisticated CLI handling. However, there are no visible integration tests verifying that different argument combinations produce expected outputs and exit codes. This is especially important for a user-facing CLI tool.
- [ ] Create Tests/CLI/CLIArgumentsTests.swift testing valid/invalid argument combinations from CLIArguments.swift
- [ ] Create Tests/CLI/CLIErrorsTests.swift verifying each CLIErrors case maps to correct ExitCode from ExitCodes.swift
- [ ] Create Tests/CLI/CLIIntegrationTests.swift with subprocess calls to the actual CLI binary testing real workflows
- [ ] Add OutputFormat.swift test coverage for JSON/text output formatting consistency
- [ ] Ensure tests reference Sources/CLI.swift entry point to catch regressions in command dispatch
🌿Good first issues
- Add unit tests for
Sources/CLI/CLIArguments.swiftargument parsing (test edge cases like-f file1 -f file2stacking, invalid flags, help text). Currently no test files visible in Sources/. - Document the exact OpenAI API compatibility layer in Core/ (which endpoints are supported, which fields are translated, which are ignored) in a new
docs/openai-compatibility.md. Currently unclear what subset of OpenAI spec is implemented. - Add a
--context-statsflag to show token usage, context window remaining, and conversation history size. ContextManager.swift likely has this data but there's no CLI output for it.
⭐Top contributors
Click to expand
Top contributors
- @Arthur-Ficial — 97 commits
- @franzenzenhofer — 1 commits
- @dependabot[bot] — 1 commits
- @github-actions[bot] — 1 commits
📝Recent commits
Click to expand
Recent commits
2abf318— docs(nixpkgs): refresh CLAUDE.md + dist-watch routine for new auto-bump (Arthur-Ficial)71ead2d— fix(nixpkgs-bump): faster clone + correct existing-PR detection (Arthur-Ficial)27538b8— feat(nixpkgs): auto-bump nixpkgs as final step of make release (Arthur-Ficial)e220225— docs(claude): refresh stale test counts in CLAUDE.md (Arthur-Ficial)77dd322— chore(nixpkgs): rip out auto-bump workflow, rely on r-ryantm + community (#139) (Arthur-Ficial)9ac3b2f— fix(nixpkgs-bump): set git identity in Sync step before workflow-preserve commit (Arthur-Ficial)85e50f4— fix(nixpkgs-bump): preserve fork workflow files so the PAT can push (Arthur-Ficial)a588170— fix(nixpkgs): rename apfel-ai -> apfel-llm everywhere (#137) (Arthur-Ficial)4a83555— docs(integrations/zed): cleaner screenshot, simpler caption (Arthur-Ficial)0d397b9— docs(integrations): add Zed agent panel guide with screenshot (Arthur-Ficial)
🔒Security observations
- High · Incomplete Security Policy Documentation —
SECURITY.md. The SECURITY.md file appears to be truncated mid-sentence in the 'Scope' section. The MCP tool execution security considerations are not fully documented, which could leave developers and users unaware of potential risks related to subprocess spawning and I/O handling. Fix: Complete the SECURITY.md documentation fully, especially detailing MCP tool execution risks, input validation requirements, and subprocess safety considerations. - Medium · HTTP Server Bound to Localhost Without Explicit Validation —
Sources/SecurityMiddleware.swift, Sources/OriginValidator.swift, Sources/Server.swift. The codebase includes an HTTP server component (referenced asapfel --serve) that provides an OpenAI-compatible API. While documented as localhost-only, the security implementation for origin/binding validation should be explicitly verified in SecurityMiddleware.swift and OriginValidator.swift. Fix: Ensure OriginValidator.swift explicitly validates that the server only binds to localhost (127.0.0.1 or ::1), implement CORS restrictions, and validate all incoming request origins. Add security headers for API responses. - Medium · MCP Subprocess Execution without Documented Sandboxing —
Sources/MCPClient.swift, Sources/Core/MCPProtocol.swift. The MCPClient.swift and MCP tool execution features spawn subprocesses. The SECURITY.md indicates this is a security surface but provides incomplete details. Without clear documentation of input validation and process isolation, there's risk of command injection or privilege escalation. Fix: Implement strict input validation for MCP tool parameters, use allowlisting for executable paths, avoid shell interpolation, and document the security model for subprocess execution. Consider using process sandboxing APIs available on macOS. - Medium · Tool Call Handler Requires Input Validation Review —
Sources/Core/Chat/ToolResolution.swift, Sources/Core/ToolCallHandler.swift. ToolCallHandler.swift and ToolResolution.swift handle dynamic tool invocation. Without visible input sanitization, there's potential for injection attacks through tool parameters or names. Fix: Implement strict schema validation for all tool parameters before execution, use allowlisting for tool names, and sanitize all user-supplied input before passing to subprocess or tool handlers. - Medium · Stream Processing with Potential Buffer Overflow or Memory Exhaustion —
Sources/Core/Concurrency/StreamTaskBox.swift, Sources/Core/BufferedLineReader.swift, Sources/Core/Concurrency/TraceBuffer.swift. StreamTaskBox.swift, BufferedLineReader.swift, and TraceBuffer.swift handle streaming data. Without visible size limits or memory bounds checking, large payloads could cause memory exhaustion or DoS conditions. Fix: Implement configurable size limits for buffers and streams, add memory usage monitoring, implement rate limiting for large requests, and add safeguards against unbounded buffering. - Low · Sensitive Information in Debug Output —
Sources/Core/DebugOutput.swift, Sources/Logging.swift. DebugOutput.swift and Logging.swift may output sensitive data (model responses, user input, API interactions) to logs. In production deployments, this could expose PII or sensitive information. Fix: Implement log level controls to disable verbose output in production, sanitize debug output to mask sensitive information, and document logging behavior in security guidelines. - Low · Version Information Disclosure —
.version, Sources/BuildInfo.swift. The.versionfile and version output viaapfel --versionexpose detailed version information, which could aid attackers in targeting known vulnerabilities. Fix: While version disclosure is generally low-risk for client tools, ensure all known vulnerabilities in current and recent versions are documented in SECURITY.md and users are encouraged to update. - Low · Missing Security Headers Documentation —
Sources/Server.swift, Sources/SecurityMiddleware.swift. No explicit documentation of security headers returned by the HTTP server API, such as X-Content-Type-Options, X-Frame-Options, or Content-Security-Policy. Fix: Add comprehensive security headers to all HTTP responses, document them in SECURITY.md, and validate them in test suites.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.