RepoPilotOpen in app →

sashabaranov/go-openai

OpenAI ChatGPT, GPT-5, GPT-Image-1, Whisper API clients for Go

Healthy

Healthy across all four use cases

weakest axis
Use as dependencyHealthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modifyHealthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn fromHealthy

Documented and popular — useful reference codebase to read through.

Deploy as-isHealthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 7mo ago
  • 58+ active contributors
  • Distributed ownership (top contributor 19% of recent commits)
Show all 7 evidence items →
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • Slowing — last commit 7mo ago

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/sashabaranov/go-openai)](https://repopilot.app/r/sashabaranov/go-openai)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/sashabaranov/go-openai on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: sashabaranov/go-openai

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/sashabaranov/go-openai shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 7mo ago
  • 58+ active contributors
  • Distributed ownership (top contributor 19% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • ⚠ Slowing — last commit 7mo ago

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live sashabaranov/go-openai repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/sashabaranov/go-openai.

What it runs against: a local clone of sashabaranov/go-openai — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters | |---|---|---| | 1 | You're in sashabaranov/go-openai | Confirms the artifact applies here, not a fork | | 2 | License is still Apache-2.0 | Catches relicense before you depend on it | | 3 | Default branch master exists | Catches branch renames | | 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code | | 5 | Last commit ≤ 228 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>sashabaranov/go-openai</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sashabaranov/go-openai. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/sashabaranov/go-openai.git
#   cd go-openai
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of sashabaranov/go-openai and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sashabaranov/go-openai(\\.git)?\\b" \\
  && ok "origin remote is sashabaranov/go-openai" \\
  || miss "origin remote is not sashabaranov/go-openai (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(Apache-2\\.0)" LICENSE 2>/dev/null \\
   || grep -qiE "\"license\"\\s*:\\s*\"Apache-2\\.0\"" package.json 2>/dev/null) \\
  && ok "license is Apache-2.0" \\
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \\
  && ok "default branch master exists" \\
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "client.go" \\
  && ok "client.go" \\
  || miss "missing critical file: client.go"
test -f "chat.go" \\
  && ok "chat.go" \\
  || miss "missing critical file: chat.go"
test -f "config.go" \\
  && ok "config.go" \\
  || miss "missing critical file: config.go"
test -f "common.go" \\
  && ok "common.go" \\
  || miss "missing critical file: common.go"
test -f "internal/request_builder.go" \\
  && ok "internal/request_builder.go" \\
  || miss "missing critical file: internal/request_builder.go"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 228 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~198d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sashabaranov/go-openai"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

go-openai is an unofficial Go client library for OpenAI's APIs (ChatGPT, GPT-4, DALL·E, Whisper). It wraps HTTP calls to platform.openai.com with idiomatic Go types and methods, supporting chat completions (streaming and non-streaming), image generation, audio transcription, embeddings, and batch processing. The library handles authentication, request/response marshaling, and error handling for OpenAI's REST endpoints. Flat package structure at repo root: core client (client.go, config.go, error.go) handles authentication and HTTP transport; feature files (chat.go, audio.go, embeddings.go, assistant.go, batch.go, files.go) define request/response types and API methods; corresponding *_test.go and *_api_test.go files provide unit and integration tests; examples/ directory contains runnable demos (chatbot/, completion/, images/, voice-to-text/). No internal packages—all public exports.

👥Who it's for

Go developers building applications that integrate with OpenAI models—chatbots, RAG systems, image generators, voice transcription tools. Both application developers using the library and contributors maintaining the client wrapper itself.

🌱Maturity & risk

Production-ready and actively maintained. The repo shows comprehensive test coverage (.codecov.yml configured, *_test.go files throughout), CI/CD pipelines (integration-tests.yml, pr.yml), and golangci linter configuration. Go 1.18+ requirement suggests modern, well-scoped API design. Regular examples (examples/ directory with concrete use cases) and clear README indicate stability and ease of adoption.

Low risk for active API coverage but medium risk for maintenance: single maintainer (sashabaranov) is a potential concern if unavailable. OpenAI's rapid API evolution means keeping up with new models and endpoints requires continuous updates—no indication of delayed sync in recent commits, but worth monitoring. No evident breaking changes policy documented, so upgrades between major versions may require code changes.

Active areas of work

Recent additions visible in file list: assistant.go/assistant_test.go (Assistants API), batch.go/batch_test.go (Batch API), chat_stream.go/chat_stream_test.go (streaming completions refactor). Workflows suggest active integration testing (integration-tests.yml) and PR validation. Project tracking new OpenAI model releases and API endpoints as they launch.

🚀Get running

git clone https://github.com/sashabaranov/go-openai.git
cd go-openai
go mod download
go test ./...

Create a .env or pass OPENAI_API_KEY for integration tests. See examples/chatbot/main.go or examples/completion/main.go for runnable entry points.

Daily commands: No server to run—this is a client library. Instead: go test ./... to run unit tests; set OPENAI_API_KEY environment variable and run go test -run TestIntegration (or similar) to hit live OpenAI API (see api_integration_test.go). Run examples: go run examples/chatbot/main.go (requires API key in environment).

🗺️Map of the codebase

  • client.go — Core OpenAI client struct and constructor that all API calls route through; defines authentication and HTTP configuration.
  • chat.go — Main ChatGPT completion API implementation; demonstrates the request/response pattern used across all endpoints.
  • config.go — Client configuration and option builders; essential for understanding how to customize API behavior.
  • common.go — Shared data structures and constants used across all API endpoints; defines error types and base request/response formats.
  • internal/request_builder.go — HTTP request construction abstraction; all API calls use this to build and execute requests to OpenAI.
  • internal/unmarshaler.go — JSON unmarshaling utilities; handles response parsing and error extraction from OpenAI API responses.

🛠️How to make changes

Add a new OpenAI API endpoint

  1. Define request and response structs in a new file (e.g., newfeature.go) following the pattern: NewFeatureRequest struct with json tags and NewFeatureResponse struct (newfeature.go)
  2. Add request/response constants and types to common.go (e.g., const APIURLNewFeature = "/v1/new-feature") (common.go)
  3. Implement the API method on Client struct (e.g., func (c *Client) CreateNewFeature(ctx context.Context, req NewFeatureRequest) (*NewFeatureResponse, error)) using c.requestAndUnmarshal() (newfeature.go)
  4. Write unit tests in newfeature_test.go using the internal/test helpers and test server from internal/test/server.go (newfeature_test.go)
  5. If streaming is needed, implement streaming variant using stream.go pattern with server-sent events parsing via stream_reader.go (newfeature_stream.go)

Add support for a new request parameter or response field

  1. Add the field to the relevant request or response struct with appropriate json tags and type (chat.go or embeddings.go (as example))
  2. If the field needs special marshaling (e.g., null vs omit), add custom MarshalJSON logic or use internal/marshaller.go helpers (internal/marshaller.go)
  3. Update the corresponding test file to include test cases for the new field (chat_test.go or embeddings_test.go (as example))
  4. If field affects JSON schema validation, update jsonschema/json.go or jsonschema/validate.go (jsonschema/validate.go)

Handle a new OpenAI error response type

  1. Define the error struct in error.go following the pattern of existing error types (e.g., RequestError, APIError) (error.go)
  2. Update the unmarshaling logic in internal/unmarshaler.go to recognize and parse the new error type from response body (internal/unmarshaler.go)
  3. Add test cases in error_test.go verifying the error is correctly parsed and typed (error_test.go)
  4. Document the error condition in code comments and update README if it represents a new failure mode (error.go)

Add multipart form file upload support to an endpoint

  1. Define request struct with io.Reader or *os.File field for file data in the endpoint file (e.g., files.go pattern) (files.go)
  2. Use internal/form_builder.go to construct multipart form body with file field and metadata (internal/form_builder.go)
  3. Call client.requestAndUnmarshal() with the form body and Content-Type multipart/form-data header (files.go)
  4. Write integration tests using internal/test/server.go to mock multipart uploads (files_api_test.go)

🔧Why these technologies

  • Go standard library (net/http, encoding/json) — Minimal external dependencies for API client library; faster builds and fewer security surface area for users
  • Functional options pattern (config.go) — Flexible client configuration without breaking API changes; allows custom BaseURL, headers, and HTTP clients
  • Server-sent events (stream_reader.go, chat_stream.go) — Native support for OpenAI's streaming responses; enables real-time token streaming for chat and completions
  • context.Context throughout APIs — Cancellation, timeouts, and deadline propagation standard in Go; critical for production use with rate limits

⚖️Trade-offs already made

  • Unofficial library (not maintained by OpenAI)

    • Why: Faster iteration and community-driven feature additions vs. official library stability guarantees
    • Consequence: May lag behind OpenAI API changes; users assume upgrade risk
  • No built-in retry logic or exponential backoff

    • Why: Keeps library lean and lets users implement retry strategy matching their SLA vs. adding complexity
    • Consequence: Users must handle transient failures themselves; no built-in rate limit backoff
  • Streaming returns an iterator vs. callback

    • Why: Go idiomatic (channels, range-based iteration) vs. callback hell; easier composition
    • Consequence: Must manage goroutine lifecycle; cannot fire-and-forget stream consumption
  • Single Client struct for all API endpoints

    • Why: Unified auth and config sharing; simpler
    • Consequence: undefined

🪤Traps & gotchas

API key secret: OPENAI_API_KEY environment variable must be set for integration tests and examples—no fallback; test suite will silently skip integration tests if missing. Streaming protocol: chat_stream.go uses Server-Sent Events (text/event-stream Content-Type)—SSE parsing is finicky; Recv() reads until io.EOF or error. BaseURL override: CustomAPIURL field in ClientConfig changes the endpoint; used for proxies or custom OpenAI-compatible servers (vLLM, LMStudio)—easy to misconfigure. Model string constants: GPT3Dot5Turbo, GPT4, etc., are hardcoded in chat.go (no enum generation)—new model names require manual library updates. No retry logic: Client doesn't retry failed requests; caller is responsible for backoff/retry (useful for rate limits).

🏗️Architecture

💡Concepts to learn

  • Server-Sent Events (SSE) — go-openai's chat_stream.go decodes SSE frames to stream ChatCompletion deltas in real-time; understanding SSE framing (newline-delimited, data: prefix) is essential to contributing to streaming features
  • Bearer Token Authentication — go-openai attaches OpenAI API keys as 'Authorization: Bearer {key}' headers in client.go; key rotation, expiry handling, and org ID routing depend on this pattern
  • JSON Marshaling / Unmarshaling — All OpenAI request/response types (ChatCompletionRequest, CreateImageRequest, etc.) are Go structs with json tags; encoding/json marshals these to REST payloads and decodes responses—subtle issues with omitempty, nested pointers, and custom types are common
  • Context Propagation & Cancellation — Every client method (CreateChatCompletion, CreateImage, etc.) takes context.Context as first arg; timeouts, cancellation, and request-scoped values flow through ctx—critical for graceful shutdown and handling streaming cleanup
  • Delta Streaming Pattern — ChatCompletion streaming returns Delta objects with partial Content fields, not full Message objects; aggregating deltas (trim, concat) is the caller's responsibility—go-openai exposes this pattern but doesn't hide it
  • Functional Options Pattern (Builder) — go-openai uses struct field defaults for optional parameters (e.g., MaxTokens, Temperature in ChatCompletionRequest) rather than separate builder methods—understanding which fields are zero-valued and how they're handled by the API is key to debugging requests
  • Idempotent Versioning & Model Constants — go-openai hardcodes model names (GPT3Dot5Turbo, GPT4) as string constants; when OpenAI releases new models, library updates lag—understanding this constraint helps explain why version bumps matter and why some users override model strings
  • openai/openai-go — Official OpenAI Go SDK—alternative with guaranteed API parity and support, but less community-driven; go-openai fills the gap for developers preferring a simpler, more agile wrapper
  • openai/openai-python — Official OpenAI Python SDK—reference implementation; go-openai often mirrors its API design and response types for consistency
  • jmorganca/ollama — Local LLM inference engine with OpenAI API compatibility—go-openai clients can point CustomAPIURL to Ollama and use the same code against local models
  • lm-sys/FastChat — Open-source LLM serving with OpenAI-compatible endpoints (vLLM backend)—go-openai can integrate via BaseURL override for multi-model inference
  • go-openai/go-openai-realtime — Sibling project extending go-openai with WebSocket support for OpenAI's Realtime API—builds on this client's architecture for voice/audio streaming

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive integration tests for streaming responses in chat_stream.go

The repo has chat_stream.go and chat_stream_test.go, but looking at the file structure, there's no dedicated stream-specific integration test file (like chat_stream_api_test.go). Streaming is a critical feature for real-time applications, and integration tests would catch edge cases around connection handling, partial message buffering, and error recovery that unit tests miss. This aligns with existing patterns like audio_api_test.go and image_api_test.go.

  • [ ] Create chat_stream_api_test.go following the pattern of audio_api_test.go
  • [ ] Add tests for stream cancellation, timeout handling, and partial message deserialization
  • [ ] Test streaming with various models (gpt-4, gpt-3.5-turbo) to verify API compatibility
  • [ ] Add streaming error handling tests (network errors, malformed chunks, server errors)

Add missing unit tests for error.go error types and unwrapping

error.go exists but error_test.go likely has limited coverage for the custom error types used throughout the codebase. The library has multiple error scenarios (API errors, validation errors, streaming errors) but there's no dedicated test file verifying error message formatting, error code parsing, and error unwrapping behavior. This is critical for users who need to handle specific error conditions.

  • [ ] Expand error_test.go to cover all custom error types defined in error.go
  • [ ] Add tests for error.Is() and error.As() compatibility for Go 1.18+ error wrapping patterns
  • [ ] Test error message parsing from various OpenAI API error responses
  • [ ] Add tests for rate limit errors and timeout errors specifically

Add API version and deprecation handling tests in config_test.go

The config.go file manages API configuration and likely needs to handle future API version changes and deprecated endpoints. Currently config_test.go probably covers basic configuration, but there are no tests for handling API version deprecation warnings, fallback behavior, or migration paths. As OpenAI frequently updates their API (GPT-4 → 4o → o1), this prevents regressions when endpoints are versioned or deprecated.

  • [ ] Add tests in config_test.go to verify handling of deprecated model names with automatic fallbacks
  • [ ] Test API version header injection and validation in requests
  • [ ] Add tests for configuration validation warnings (e.g., using deprecated endpoints)
  • [ ] Document expected behavior in internal/test/checks/checks.go for version compatibility assertions

🌿Good first issues

  • Add missing test coverage for error.go—the file has no corresponding error_test.go yet; write tests that verify APIError.Error() format, nil checks, and RequestError differentiation.
  • Create integration test example in examples/ that shows batching multiple prompts with Batch API (batch.go exists but has no runnable example like chatbot/ or completion/).
  • Document the streaming response protocol in chat_stream.go with inline comments explaining SSE framing, Delta payload structure, and stop reason handling—current code lacks docstrings.

Top contributors

Click to expand

📝Recent commits

Click to expand
  • 5d7a276 — Stop stripping dots in azure model mapper for models that aren't 3.5 based (#1079) (krissetto)
  • 8e5611c — Add Verbosity parameter to Chat Completion Request (#1064) (azdaev)
  • f71d1a6 — feat: add safety_identifier params (#1066) (liushuangls)
  • f7d6ece — add GPT-5 model constants and reasoning validation (#1062) (behzadsp)
  • c4273cb — fix(chat): shorten comment to pass linter (#1050) (sashabaranov)
  • 4f87294 — Add GuidedChoice to ChatCompletionRequest (#1034) (AyushSawant18588)
  • 181c0e8 — Add tests for internal utilities (#1044) (sashabaranov)
  • e6c1d3e — Increase jsonschema test coverage (#1043) (sashabaranov)
  • bd36c45 — Support for extra_body parameter for embeddings API (#906) (AyushSawant18588)
  • 3bb1014 — ci: enable version compatibility vet (#1042) (sashabaranov)

🔒Security observations

The go-openai library demonstrates reasonable security practices as a well-structured API client library. No critical vulnerabilities were identified from the static analysis of the file structure and README. However, several medium and low-severity concerns exist: potential API key exposure in example code, unverified HTTP client security configurations, missing security documentation (SECURITY.md), and lack of visible security-focused testing. The library's security posture is moderate and would benefit from explicit security documentation, secure default configurations, and enhanced security testing practices. Developers using this library should implement their own API key management strategies and keep dependencies updated.

  • Medium · API Key Exposure in Examples — README.md, examples/ directory. The README.md contains example code showing API key usage with placeholder 'your token'. While this is intentional documentation, developers may inadvertently commit actual API keys following this pattern. The examples directory may also contain hardcoded credentials if developers copy-paste without proper secret management. Fix: Add clear warnings in documentation about never committing real API keys. Recommend using environment variables (os.Getenv) or secure secret management tools. Consider adding pre-commit hooks to detect and prevent API key commits.
  • Medium · Potential HTTP Client Security Configuration — client.go, config.go. As an OpenAI API client library, the codebase likely creates HTTP clients to communicate with external APIs. Without visibility into client.go and related files, there's a risk of missing security configurations such as: certificate validation bypass, missing timeout configurations, or insufficient TLS version enforcement. Fix: Verify that HTTP clients enforce: TLS 1.2+ only, proper certificate validation, reasonable connection timeouts, request timeouts, and that InsecureSkipVerify is never set to true in production code.
  • Low · Missing CHANGELOG Security Tracking — Repository root. The repository structure shows no visible CHANGELOG.md or SECURITY.md file for tracking security updates and vulnerability disclosures. This makes it difficult for users to identify security-related releases. Fix: Create a SECURITY.md file documenting security disclosure procedures and vulnerability reporting. Maintain a CHANGELOG.md clearly marking security-related fixes with a [SECURITY] tag.
  • Low · Minimal Dependency Constraints — go.mod. The go.mod shows 'go 1.18' with likely minimal explicit version pinning. While Go modules provide better dependency management than some languages, transitive dependencies of indirect packages could introduce vulnerabilities. Fix: Regularly run 'go get -u' followed by security audits. Use 'go mod verify' to ensure module integrity. Consider using tools like Nancy or Trivy for continuous vulnerability scanning of dependencies.
  • Low · Test Coverage for Security-Sensitive Code — Test files throughout codebase. While the codebase has comprehensive test files (*_test.go), security-specific tests (e.g., authentication bypass attempts, credential leakage, malformed response handling) may be limited. Fix: Add dedicated security-focused tests for: authentication credential handling, secure error message handling (avoiding credential leaks in error messages), response parsing from untrusted input, and timeout behaviors.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Healthy signals · sashabaranov/go-openai — RepoPilot