MacPaw/OpenAI
Swift community driven package for OpenAI public API
Healthy across the board
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓Last commit 1w ago
- ✓14 active contributors
- ✓MIT licensed
Show 3 more →Show less
- ✓CI configured
- ✓Tests present
- ⚠Concentrated ownership — top contributor handles 60% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/macpaw/openai)Paste at the top of your README.md — renders inline like a shields.io badge.
▸Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/macpaw/openai on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: MacPaw/OpenAI
Generated by RepoPilot · 2026-05-10 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in Verify before trusting
below. If any check returns
FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding. - Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/MacPaw/OpenAI shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 1w ago
- 14 active contributors
- MIT licensed
- CI configured
- Tests present
- ⚠ Concentrated ownership — top contributor handles 60% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live MacPaw/OpenAI
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/MacPaw/OpenAI.
What it runs against: a local clone of MacPaw/OpenAI — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in MacPaw/OpenAI | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | Last commit ≤ 40 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of MacPaw/OpenAI. If you don't
# have one yet, run these first:
#
# git clone https://github.com/MacPaw/OpenAI.git
# cd OpenAI
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of MacPaw/OpenAI and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "MacPaw/OpenAI(\\.git)?\\b" \\
&& ok "origin remote is MacPaw/OpenAI" \\
|| miss "origin remote is not MacPaw/OpenAI (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \\
|| grep -qiE "\"license\"\\s*:\\s*\"MIT\"" package.json 2>/dev/null) \\
&& ok "license is MIT" \\
|| miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \\
&& ok "default branch main exists" \\
|| miss "default branch main no longer exists"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 40 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~10d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/MacPaw/OpenAI"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
MacPaw/OpenAI is a Swift native package providing type-safe bindings to the OpenAI API (chat completions, embeddings, image generation, audio transcription/synthesis, assistants, and files). It bridges the gap between OpenAI's REST API and Swift developers by offering structured models, async/await support, and streaming capabilities without raw HTTP boilerplate. The package also supports other AI providers (Gemini, DeepSeek, Perplexity) via provider abstraction. Hybrid package + demo structure: the root contains the Swift package definition (Package.swift) with API client logic, while /Demo/Demo is a macOS SwiftUI reference app (Xcode project) and /Demo/DemoChat is a standalone Swift executable package with AssistantStore, ChatStore, and associated middleware. Sources are organized by OpenAI capability (chat, images, audio, assistants, embeddings) rather than by architectural layer.
👥Who it's for
iOS/macOS developers building AI-powered applications who need a battle-tested Swift SDK for OpenAI, including SwiftUI app developers working on the Demo chat application and package maintainers contributing to the community-driven implementation.
🌱Maturity & risk
Actively developed and production-ready. The repo has comprehensive CI/CD via GitHub Actions (swift.yml, codeql.yml), a working demo app in /Demo, both a SwiftUI demo and a DemoChat executable example, and follows semantic versioning on Swift Package Index. Indicates regular maintenance with multiple workflows and structured issue templates suggesting an established project.
Low risk for established use cases; moderate risk for cutting-edge features. The reliance on OpenAI's evolving API means breaking changes may cascade into releases (Assistants are still in Beta per README). Dependency on Swift Package Manager may create resolution friction for large monorepos. Single repository structure (not split into core/extras) means SDK bloat isn't addressed by optional dependencies, and the community-driven nature means release velocity depends on MacPaw's availability.
Active areas of work
The repo actively maintains Assistants API support (threads, runs, file uploads), recently added MCP (Model Context Protocol) tool integration per README, and monitors emerging OpenAI features. Dependabot is configured (.github/dependabot.yml) suggesting automated dependency updates. The presence of /Demo/DemoChat with modern patterns (LoggingMiddleware.swift, MCPToolsStore.swift) indicates ongoing feature parity with OpenAI's latest offerings.
🚀Get running
- Clone:
git clone https://github.com/MacPaw/OpenAI.git && cd OpenAI. 2. Inspect the Swift package:cat Package.swift. 3. Build:swift build. 4. Run demo (requires Xcode):open Demo/Demo.xcodeprojor run DemoChat:cd Demo/DemoChat && swift run. 5. Set OpenAI API key in Demo app via APIKeyModalView.swift.
Daily commands:
For macOS demo: open Demo/Demo.xcodeproj && press Run in Xcode. For DemoChat CLI: cd Demo/DemoChat && swift run DemoChat. Both require OPENAI_API_KEY environment variable or interactive setup (APIKeyModalView for GUI, DescribedError.swift for errors).
🗺️Map of the codebase
- Sources/Public: Contains the public API client interface and core request/response models for all OpenAI endpoints (chat, images, embeddings, assistants)
- Demo/Demo.xcodeproj/project.pbxproj: The SwiftUI reference application demonstrating practical usage of the SDK with API key management and streaming responses
- Demo/DemoChat/Sources/ChatStore.swift: State management layer for the DemoChat executable showing how to integrate assistants, threads, and runs into a conversational app
- Demo/DemoChat/Sources/MCPToolsStore.swift: Demonstrates MCP (Model Context Protocol) tool integration, a cutting-edge feature for function-calling workflows
- .github/workflows/swift.yml: Defines the CI/CD pipeline for building, testing, and validating the package on every push and PR
- README.md: Primary documentation with organized sections for each OpenAI capability (chat, functions, tools, images, audio, assistants, providers)
- CONTRIBUTING.md: Guidelines for external contributors on code style, testing expectations, and PR submission process
🛠️How to make changes
- Core API models: edit /Sources/{feature} directories (e.g., Sources/Public/Models for Codable structs). 2. Add new endpoint: mirror OpenAI docs, create Request/Response types, add method to main OpenAI client. 3. Modify demo UI: SwiftUIAdditions.swift and DemoApp.swift for macOS; ChatStore.swift for state management in DemoChat. 4. Implement tests: create under Tests/ directory (currently not visible in file list, so tests should be added here). 5. Update README: modify root README.md with new feature usage examples.
🪤Traps & gotchas
- API Key Management: Demo and DemoChat require OPENAI_API_KEY environment variable; APIKeyModalView.swift shows interactive setup, but headless usage will fail silently if unset. 2. Swift Version Lock: Package likely requires Swift 5.9+ for async/await; older toolchains will not compile. 3. Assistants API Beta Status: Thread/run operations may have undocumented state transitions; check OpenAI docs before filing bugs. 4. Provider Configuration: Using non-OpenAI providers (Gemini, DeepSeek) requires custom baseURL and auth headers; not auto-detected. 5. Streaming Cancellation: AsyncSequence streams don't auto-cancel; must manually break loop or call task.cancel() (see Cancelling requests in README).
💡Concepts to learn
- AsyncSequence and Streaming — OpenAI chat completions support server-sent event streams; AsyncSequence is Swift's native abstraction for lazy, async iteration over streamed tokens, essential for responsive UI updates
- Codable Protocol and JSON Serialization — All request/response models use Codable for bidirectional JSON mapping; understanding encoder/decoder strategies is critical for handling OpenAI's optional fields and nested structures
- Provider Pattern (Abstraction) — SDK supports OpenAI, Gemini, DeepSeek via a pluggable provider interface; grasping this pattern shows how to extend the package for new LLM backends
- Assistants API State Machine (Threads, Runs, Steps) — Assistants use a polling-based run status model (queued → in_progress → completed) with tool submission steps; understanding this flow is critical for reliable agentic workflows
- Model Context Protocol (MCP) — Emerging standard for tool communication between LLMs and environments; MCPToolsStore.swift shows integration; understanding MCP is key for building next-gen function-calling systems
- URLSession and HTTP Streaming — Foundation's URLSession with data(from:) and byte streaming enables efficient token-by-token reception; needed to implement low-latency chat streaming without blocking the main thread
- Structured Concurrency (Task, TaskGroup) — DemoChat uses Task and structured concurrency for managing multiple async operations (assistant runs, image uploads); critical for preventing race conditions and memory leaks
🔗Related repos
openai/openai-swift— Official OpenAI Swift SDK alternative; compare if this repo's community-driven approach offers advantages in API coverage or ease of useanthropic-ai/anthropic-sdk-swift— Anthropic's Claude SDK using similar async/await patterns; reference for multi-provider SDK design decisionsaws-amplify/amplify-swift— AWS's Swift ecosystem package; inspect for inspiration on streaming patterns and provider abstraction designMacPaw/KeychainAccess— Sibling MacPaw repo for secure API key storage; often integrated with OpenAI SDK for credential management on macOS/iOSpointfreeco/swift-composable-architecture— TCA framework used by sophisticated DemoChat implementations; study for state management patterns in conversational AI apps
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive unit tests for OpenAI API endpoints
The repo has CI workflows (swift.yml, codeql.yml) but no visible Tests directory in the file structure. Given this is a public SDK for OpenAI API, adding unit tests for core API interactions (chat completions, embeddings, image generation, assistants) would improve code quality and prevent regressions. This is especially important since Demo/DemoChat shows real usage patterns that should have corresponding test coverage.
- [ ] Create Tests/ directory at repo root with subdirectories mirroring Sources/ structure
- [ ] Add unit tests for chat/completions endpoints using mock URLSession
- [ ] Add unit tests for image creation/editing (referenced in Demo/DemoChat/Sources/UI/Images/)
- [ ] Add unit tests for assistants API (AssistantStore.swift in demo shows usage)
- [ ] Update swift.yml workflow to run tests with
swift testcommand
Add integration test workflow for OpenAI API compatibility
While codeql.yml and swift.yml exist, there's no visible integration test workflow. Given this is a community-maintained SDK that wraps OpenAI's public API, a separate workflow testing against real (or mocked) API responses would catch breaking changes when OpenAI updates their API. This could use snapshot testing or response validation.
- [ ] Create .github/workflows/integration-tests.yml workflow file
- [ ] Configure workflow to run on schedule (weekly) and on PRs to main
- [ ] Add step to validate API response models against known OpenAI schema structures
- [ ] Include step to test assistant, chat, image, and speech endpoints (seen in DemoChat)
- [ ] Add secrets management for test API key (read-only or rate-limited preferred)
Add API documentation generator for public SDK endpoints
The README shows installation but the Usage/Initialization section is incomplete. The repo contains sophisticated features (Assistants, Speech, MCPTools, Image editing) visible in Demo/DemoChat but lacks generated API docs. Adding a documentation generation setup (using Swift-DocC or equivalent) with CI integration would help contributors and users understand the full SDK surface.
- [ ] Create Sources/OpenAI/Documentation.docc directory structure for DocC integration
- [ ] Add documentation symbols to public types in main package (currently not visible in file list)
- [ ] Create .github/workflows/documentation.yml to build and deploy docs on releases
- [ ] Add sample code snippets in docs for: chat completions, image generation, assistants, speech endpoints
- [ ] Update README with link to generated documentation site
🌿Good first issues
- Add comprehensive unit tests for Models in Sources/ — currently no Test/ directory visible; start by writing Codable round-trip tests for ChatCompletionRequest and ChatCompletionResponse to ensure JSON serialization correctness.
- Extend DemoChat's LoggingMiddleware.swift to log token usage metrics (completion_tokens, prompt_tokens) from ChatCompletionResponse and persist to a local SQLite store for usage analytics dashboard.
- Document provider-specific configuration for Gemini and DeepSeek in README.md with complete code examples showing baseURL override, API key header setup, and any model-name mapping quirks (currently only lists provider support without concrete examples).
⭐Top contributors
Click to expand
Top contributors
- @nezhyborets — 60 commits
- @Krivoblotsky — 9 commits
- @JaredConover — 7 commits
- @batanus — 6 commits
- @neelvirdy — 5 commits
📝Recent commits
Click to expand
Recent commits
397a601— Merge pull request #414 from MacPaw/claude/add-dependabot-config (nezhyborets)fd447f9— Merge pull request #413 from MacPaw/claude/issue-392-readme-structured-outputs (nezhyborets)bede880— Merge pull request #412 from MacPaw/claude/issue-386-prompt-cache-key (nezhyborets)97e1956— Merge pull request #411 from MacPaw/claude/issue-258-completion-tokens-details (nezhyborets)2324e24— Merge pull request #410 from MacPaw/claude/strange-taussig-a81701 (nezhyborets)88534fc— Apply suggestion from @Copilot (nezhyborets)754b889— ci: add Dependabot config for github-actions and swift (Krivoblotsky)38b4501— docs(readme): clarify response_format vs text for structured outputs (Krivoblotsky)286f0e0— feat(responses): add prompt_cache_key to CreateModelResponseQuery (Krivoblotsky)9fff161— feat(chat): add completion_tokens_details to CompletionUsage (Krivoblotsky)
🔒Security observations
The OpenAI Swift package demonstrates reasonable security practices with a defined security reporting process and version support policy. However, there are medium-severity concerns regarding API key handling in the demo application and incomplete security documentation. The primary risks involve credential exposure in the interactive demo components and incomplete security policy communication. The codebase lacks visible evidence of security hardening measures such as certificate pinning and comprehensive dependency vulnerability scanning in the CI/CD pipeline. Overall security posture is acceptable for a community-maintained SDK, but improvements in credential handling and documentation are recommended.
- Medium · API Key Exposure in Demo Application —
Demo/App/APIKeyModalView.swift. The Demo/App/APIKeyModalView.swift file handles API key input and storage. Without explicit code review, there is potential risk of API keys being logged, cached insecurely, or exposed in memory dumps. The file structure suggests interactive API key handling which could lead to credential exposure. Fix: Ensure API keys are never logged, stored in UserDefaults without encryption, or kept in memory longer than necessary. Use Keychain for secure storage on iOS/macOS. Implement proper memory zeroing after use. - Medium · Incomplete Security Policy Documentation —
SECURITY.md. The SECURITY.md file appears incomplete (ends with 'We will' without finishing the sentence). This incomplete security policy may not adequately communicate the security process to users and researchers. Fix: Complete the SECURITY.md file with full details on response timeline, acknowledgment procedures, and disclosure policy. Clearly state expected response times and conditions for public disclosure. - Low · Demo Application API Key Handling —
Demo/App/APIProvidedView.swift, Demo/App/APIKeyModalView.swift. The Demo application contains APIProvidedView.swift and APIKeyModalView.swift which handle sensitive OpenAI API credentials. If the demo is published or shared, there's risk of API key exposure through memory analysis or debug logs. Fix: Ensure demo application does not include any default/hardcoded API keys. Implement secure input validation and sanitization. Document that users should use their own API keys and never commit them to version control. - Low · Dependency Management Transparency —
Package.swift, Package.resolved. While Package.resolved and Package.swift are present, the specific dependency versions and security status of OpenAI SDK dependencies are not visible in the provided file structure. This makes it difficult to assess if vulnerable transitive dependencies are included. Fix: Regularly audit dependencies using Swift package security tools. Implement automated dependency vulnerability scanning in CI/CD pipeline (consider adding OWASP Dependency-Check or similar). Keep dependencies updated to latest secure versions. - Low · Missing HTTPS/TLS Configuration Verification —
Sources/OpenAI/. As an OpenAI API client library, it should enforce TLS 1.2+ and certificate pinning for API communication, but this cannot be verified from the provided file structure. HTTP traffic would be a critical security issue. Fix: Verify that all API communications use HTTPS with TLS 1.2 or higher. Consider implementing certificate pinning for OpenAI API endpoints. Document these security requirements clearly.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.