RepoPilot

ChatGPTNextWeb/NextChat

✨ Light and Fast AI Assistant. Support: Web | iOS | MacOS | Android | Linux | Windows

WAIT

Slowing — last commit 5mo ago

  • Last commit 5mo ago
  • 5 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • Slowing — last commit 5mo ago
  • Small team — 5 top contributors
  • Concentrated ownership — top contributor handles 70% of commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Embed this verdict

[![RepoPilot: WAIT](https://repopilot.app/api/badge/chatgptnextweb/nextchat)](https://repopilot.app/r/chatgptnextweb/nextchat)

Paste into your README — the badge live-updates from the latest cached analysis.

Onboarding doc

Onboarding: ChatGPTNextWeb/NextChat

Generated by RepoPilot · 2026-05-05 · Source

Verdict

WAIT — Slowing — last commit 5mo ago

  • Last commit 5mo ago
  • 5 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Slowing — last commit 5mo ago
  • ⚠ Small team — 5 top contributors
  • ⚠ Concentrated ownership — top contributor handles 70% of commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

TL;DR

NextChat is a Next.js 14-based web/desktop AI chat interface that provides a unified frontend for multiple LLM providers (OpenAI, Claude, DeepSeek, Gemini, Baidu, Alibaba, ByteDance, etc.) with PWA support and Tauri-based native desktop apps for Windows/macOS/Linux. It solves the problem of needing separate clients for each AI provider by routing requests through a unified proxy layer (app/api/[provider]/[...path]/route.ts) and managing conversation state, custom system prompts ('masks'), and model settings in a single app. The app follows Next.js App Router conventions: app/api/ contains one file per LLM provider (openai.ts, anthropic.ts, google.ts, etc.) plus a catch-all proxy at app/api/[provider]/[...path]/route.ts that dispatches to them. The Tauri desktop shell lives alongside the Next.js app, and app/masks/ contains build-time prompt template generation (built via app/masks/build.ts before next build).

Who it's for

Developers and power users who want a self-hostable, multi-provider AI chat frontend — particularly those who want to run their own instance on Vercel/Docker/Zeabur to avoid vendor lock-in, manage API keys server-side, and customize prompts. Also for contributors wanting to add new LLM provider integrations.

Maturity & risk

The repo has a full CI/CD pipeline (.github/workflows/app.yml, docker.yml, test.yml), multi-platform builds via Tauri, and supports 10+ LLM providers, indicating substantial maturity. Tests exist (jest config, test:ci script) and there's a Husky pre-commit lint gate. It is production-ready, with the iOS app recently launched and MCP support being added, though commit activity has slowed in recent months (last commit roughly five months ago, per the verdict above).

The dependency list is large (30+ runtime deps including mermaid, react-markdown, multiple rehype/remark plugins) and the codebase mixes client/server concerns heavily across app/api/* files for 10+ providers, increasing surface area for breaking changes when providers update APIs. Single-maintainer risk is real — the original author is Yidadaa and the org is ChatGPTNextWeb, but community contributions are accepted. The ENABLE_MCP=true build flag for MCP support suggests some features are behind experimental flags and not fully stabilized.

Active areas of work

MCP (Model Context Protocol) support is actively being integrated — the README calls it out with an ENABLE_MCP=true build flag and @modelcontextprotocol/sdk is a listed dependency. The iOS app was recently launched (source code forthcoming at ChatGPTNextWeb/NextChat-iOS). New providers like 302AI (app/api/302ai.ts) and SiliconFlow (app/api/siliconflow.ts) have recently been added.

Get running

```sh
git clone https://github.com/ChatGPTNextWeb/NextChat.git
cd NextChat
cp .env.template .env.local   # add your OPENAI_API_KEY and optionally CODE
yarn install
yarn dev
```

Daily commands:

  • Dev server: yarn dev (concurrently runs mask:watch + next dev)
  • Production build: yarn build (runs the mask build, then next build in standalone mode)
  • Desktop app dev: yarn app:dev
  • Desktop app build: yarn app:build
  • Static export: yarn export

Map of the codebase

  • app/components/chat.tsx — The primary chat UI component orchestrating message rendering, user input, streaming responses, and tool calls — the heart of the user experience.
  • app/client/api.ts — Defines the unified LLM client abstraction and provider routing logic that all platform integrations must conform to.
  • app/api/[provider]/[...path]/route.ts — Universal catch-all Next.js API route that proxies requests to any supported AI provider, making it the single server-side gateway for all LLM traffic.
  • app/api/common.ts — Shared server-side utilities including auth checking, request validation, and provider-agnostic proxy helpers used by every API route.
  • app/components/home.tsx — Root application shell that bootstraps state stores, renders the sidebar/chat layout, and handles routing between major views.
  • app/api/auth.ts — Implements access code and API key authentication logic that gates all backend proxy endpoints.
  • app/client/platforms/openai.ts — Reference implementation of the LLM client interface for OpenAI, serving as the template all other platform clients follow.

How to make changes

Add a new AI provider

  1. Create a client-side platform file implementing the LLM interface (chat, usage, models methods), following the OpenAI reference (app/client/platforms/openai.ts) — see the skeleton sketched after this list
  2. Register the new provider in the client factory so getClientApi() returns your new client (app/client/api.ts)
  3. Create a server-side handler that injects auth headers and constructs the correct upstream URL for proxying (app/api/openai.ts)
  4. Wire the new provider into the universal catch-all route handler's provider switch/map (app/api/[provider]/[...path]/route.ts)
  5. Add provider-specific environment variables and defaults (.env.template)
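As a rough illustration of step 1, here is a skeletal platform client under assumed names (AcmeApi, /api/acme); the real method signatures are defined by the interface in app/client/api.ts, so mirror app/client/platforms/openai.ts for the actual contract rather than this sketch.

```ts
// Hypothetical app/client/platforms/acme.ts — skeleton only. The interface
// name and callback shapes below are assumptions, not the repo's real types.
export class AcmeApi /* implements the LLM client interface from app/client/api.ts */ {
  path(endpoint: string): string {
    // Route through the server-side proxy so API keys never reach the browser.
    return `/api/acme/${endpoint}`;
  }

  async chat(options: {
    messages: { role: string; content: string }[];
    onUpdate?: (partial: string) => void;
    onFinish: (full: string) => void;
    onError?: (err: Error) => void;
  }) {
    try {
      const res = await fetch(this.path("v1/chat/completions"), {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages: options.messages, stream: false }),
      });
      const json = await res.json();
      options.onFinish(json.choices?.[0]?.message?.content ?? "");
    } catch (err) {
      options.onError?.(err as Error);
    }
  }

  async usage() {
    return { used: 0, total: 0 }; // many providers expose no usage endpoint
  }

  async models() {
    return []; // optionally proxy /api/acme/v1/models
  }
}
```

Steps 3–4 (the server-side handler and catch-all wiring) follow the same proxy pattern; a factory-style sketch of that side appears under the PR ideas section below.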

Add a new UI panel or settings page

  1. Create a new React component file under app/components/ with its paired .module.scss for scoped styles (app/components/mask.tsx)
  2. Add a sidebar navigation entry and route the new panel in the root layout shell (app/components/home.tsx)
  3. If the panel needs persistent state, add a new Zustand store slice following existing store patterns (app/components/home.tsx) — a generic sketch follows this list
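A generic Zustand-with-persistence sketch for step 3. The repo wraps Zustand in its own store helpers, so treat the hypothetical usePanelStore below as shape only, not the project's actual pattern.

```ts
// Generic Zustand + persist sketch; names are illustrative, not from the repo.
import { create } from "zustand";
import { persist } from "zustand/middleware";

interface PanelState {
  collapsed: boolean;
  lastOpenedTab: string;
  toggle: () => void;
  openTab: (tab: string) => void;
}

export const usePanelStore = create<PanelState>()(
  persist(
    (set) => ({
      collapsed: false,
      lastOpenedTab: "general",
      toggle: () => set((s) => ({ collapsed: !s.collapsed })),
      openTab: (tab) => set({ lastOpenedTab: tab }),
    }),
    { name: "panel-store" }, // storage key; defaults to localStorage
  ),
);
```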

Add a new server-side API endpoint

  1. Create a new route.ts under app/api/<feature>/ using Next.js App Router conventions with GET/POST handlers (app/api/config/route.ts) — sketched after this list
  2. Import and call authCheck/validateRequest from common.ts to enforce access control (app/api/common.ts)
  3. Consume the new endpoint from a client platform or component using fetch with the /api/ prefix (app/client/api.ts)
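A minimal sketch of steps 1–2, assuming a hypothetical app/api/feature/route.ts; the auth helper's name and signature are assumptions to verify against the real exports in app/api/auth.ts and app/api/common.ts.

```ts
// Hypothetical app/api/feature/route.ts — App Router handlers behind an auth gate.
import { NextRequest, NextResponse } from "next/server";
import { auth } from "../auth"; // assumed export; the real helper/signature may differ

export async function GET() {
  return NextResponse.json({ status: "ok" });
}

export async function POST(req: NextRequest) {
  // Enforce the access-code / API-key check before doing any work.
  const authResult = auth(req);
  if (authResult?.error) {
    return NextResponse.json(authResult, { status: 401 });
  }

  const payload = await req.json();
  // ... feature-specific logic ...
  return NextResponse.json({ received: payload });
}
```

From the client side, call it with fetch("/api/feature", { method: "POST", ... }) as step 3 describes.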

Add a new prompt mask (system prompt template)

  1. Add a new mask definition object to the appropriate locale masks source file in app/masks/ (app/components/mask.tsx) — general shape sketched below
  2. Run the build script to regenerate the compiled masks bundle used at runtime (app/components/new-chat.tsx)
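The exact mask schema lives in the app/masks/ sources; the entry below only illustrates the general shape this doc implies (a named system prompt plus model settings). Field names are assumptions — copy an existing mask rather than this sketch.

```ts
// Illustrative mask entry; field names are assumed, not the repo's real Mask type.
const translatorMask = {
  name: "Technical Translator",
  context: [
    {
      role: "system" as const,
      content:
        "You are a precise technical translator. Preserve code blocks and identifiers verbatim.",
    },
  ],
  modelConfig: {
    model: "gpt-4o-mini",
    temperature: 0.3,
  },
  lang: "en",
};
```

After adding the entry, re-run the mask build (yarn mask, which yarn build also triggers) so the compiled bundle used at runtime picks it up.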

Why these technologies

  • Next.js App Router — Enables unified full-stack deployment — React UI and API proxy routes in one project — simplifying self-hosting and Vercel/Docker deploys without a separate backend service.
  • Zustand — Lightweight state management with built-in persistence middleware; keeps chat history, settings, and masks in localStorage without the boilerplate of Redux.
  • Tauri (Rust) — Provides native desktop apps for Windows/macOS/Linux from the same web codebase without Electron's memory overhead.
  • Server-side API proxy — Keeps user API keys off the client in self-hosted mode and allows the operator to inject a single shared key, preventing CORS issues and key exposure.
  • WebDAV / Upstash Redis sync — Offers provider-agnostic cloud sync without a dedicated database, letting users bring their own storage backend.

Trade-offs already made

  • All AI traffic routed through Next.js server proxy

    • Why: Hides API keys from the browser and centralizes auth/rate-limiting
    • Consequence: Adds ~50–200ms latency per request; server must handle streaming long-lived connections, which can exhaust serverless function timeouts on Vercel's free tier.
  • Zustand + localStorage for all state persistence

    • Why: Zero infrastructure required for self-hosters; instant load with no backend round-trip
    • Consequence: Chat history is device-local by default; no conflict resolution for concurrent edits; large histories can hit localStorage size limits.
  • Single universal catch-all API route for all providers

    • Why: Minimizes route file proliferation and centralizes proxy logic
    • Consequence: Provider-specific edge cases must be handled inside one route handler, increasing its complexity and making provider isolation harder to test.
  • Masks pre-compiled at build time via a build script

    • Why: Enables type-safe, tree-shaken prompt templates without runtime file I/O
    • Consequence: Adding or editing masks requires a rebuild; masks cannot be added dynamically at runtime without a custom API.

Non-goals (don't propose these)

  • Does not implement its

Traps & gotchas

  1. yarn mask MUST run before next build — the build.ts script generates required JSON files; skipping it breaks mask/system-prompt features.
  2. The ENABLE_MCP=true environment variable must be set at BUILD TIME (not runtime) to include MCP support.
  3. The CODE env var sets a comma-separated access password list; if unset, the app is open to anyone — critical for public deployments.
  4. Desktop builds require Rust + the Tauri CLI installed separately; yarn app:build will fail without them.
  5. Several providers (Baidu, iFlytek, GLM) have non-standard auth flows implemented in their respective api/*.ts files and require provider-specific env vars beyond just an API key.

Architecture

Concepts to learn

  • Server-Sent Events (SSE) streaming — All LLM providers stream token responses via SSE; NextChat uses @fortaine/fetch-event-source to consume these streams and relay them to the browser, and every provider handler must correctly handle chunked streaming responses. A minimal consumption sketch follows this list.
  • Next.js App Router catch-all segments — The [...path] dynamic segment in app/api/[provider]/[...path]/route.ts is what allows NextChat to transparently proxy arbitrary sub-paths for each provider (e.g. /api/openai/v1/chat/completions) without hardcoding every endpoint.
  • Model Context Protocol (MCP) — MCP is the emerging standard for AI models to call external tools/resources; NextChat is integrating it via @modelcontextprotocol/sdk gated behind ENABLE_MCP=true, making it a key upcoming architectural feature.
  • Tauri (Rust-based desktop shell) — NextChat uses Tauri instead of Electron to produce native desktop apps with a much smaller bundle size; the Rust source is in the repo and yarn app:build compiles it — contributors modifying desktop behavior need to understand this layer.
  • IndexedDB via idb-keyval — Chat history, settings, and masks are persisted in the browser's IndexedDB (not a backend DB) using idb-keyval, meaning all user data is local-first — important for understanding data portability and the lack of a user account system.
  • KaTeX math rendering — rehype-katex is in the pipeline for rendering LaTeX math expressions in AI responses, a non-obvious feature that requires the katex CSS to be loaded client-side and affects markdown parsing configuration.
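A minimal, self-contained sketch of consuming one of these SSE streams with @fortaine/fetch-event-source; the endpoint and payload are placeholders, and the repo's real streaming code adds retry, abort, and provider-specific parsing on top of this.

```ts
import { fetchEventSource } from "@fortaine/fetch-event-source";

// Placeholder endpoint/payload — the real client builds these per provider.
async function streamChat(onToken: (text: string) => void) {
  const controller = new AbortController();

  await fetchEventSource("/api/openai/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      stream: true,
      messages: [{ role: "user", content: "Hello" }],
    }),
    signal: controller.signal,
    onmessage(ev) {
      if (ev.data === "[DONE]") {
        controller.abort(); // upstream marks end of stream (OpenAI-style)
        return;
      }
      const chunk = JSON.parse(ev.data);
      const delta = chunk.choices?.[0]?.delta?.content;
      if (delta) onToken(delta);
    },
    onerror(err) {
      throw err; // rethrowing stops the library's automatic retry loop
    },
  });
}
```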

Related repos

  • mckaywrigley/chatbot-ui — Direct alternative — another open-source, self-hostable multi-provider AI chat UI built with Next.js targeting the same use case.
  • lobehub/lobe-chat — Close alternative in the same ecosystem: feature-rich self-hostable AI chat with multi-provider support, plugin system, and desktop app.
  • openai/openai-node — Companion SDK — NextChat's openai.ts provider handler wraps OpenAI-compatible APIs; understanding this SDK clarifies the streaming/response format.
  • tauri-apps/tauri — The Rust-based framework used to build NextChat's Windows/macOS/Linux desktop apps via yarn app:build.
  • ChatGPTNextWeb/NextChat-iOS — The companion iOS app repo (source coming soon) that pairs with this web/desktop codebase for the full NextChat ecosystem.

PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add unit tests for app/api/auth.ts and app/api/common.ts

The repo has a test CI workflow (.github/workflows/test.yml) and Jest configured, but the critical authentication and common API utility logic in app/api/auth.ts and app/api/common.ts appears to have no dedicated test coverage. These files are foundational — auth.ts handles API key validation and access control, and common.ts handles shared proxy/fetch logic used by all provider routes. Bugs here affect every provider integration. Adding tests would catch regressions when new providers are added. A sketch of what such a test could look like follows the checklist below.

  • [ ] Audit app/api/auth.ts to identify all exported functions and their branches (e.g. key validation, bearer token checks)
  • [ ] Create tests/api/auth.test.ts with unit tests covering: valid API key, missing API key, malformed token, and access code scenarios
  • [ ] Audit app/api/common.ts for shared request/response helpers and error handling paths
  • [ ] Create tests/api/common.test.ts covering fetch error handling, header injection, and proxy URL construction
  • [ ] Run 'yarn test:ci' to confirm tests pass in the existing Jest setup
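A sketch of what the first test file could look like, assuming auth.ts exports a function that reads the Authorization header and checks it against the CODE access list; the import path, function name, and return shape are all assumptions to confirm before writing real tests.

```ts
// tests/api/auth.test.ts — illustrative only; adjust names to the real exports.
import { NextRequest } from "next/server";
import { auth } from "../../app/api/auth"; // assumed export

function requestWith(token?: string) {
  return new NextRequest("http://localhost/api/openai/v1/chat/completions", {
    headers: token ? { Authorization: `Bearer ${token}` } : {},
  });
}

describe("auth", () => {
  beforeEach(() => {
    process.env.CODE = "secret-code"; // access-code list read by the handler
  });

  it("rejects requests with no credentials", () => {
    expect(auth(requestWith()).error).toBe(true);
  });

  it("accepts a valid access code", () => {
    expect(auth(requestWith("secret-code")).error).toBe(false);
  });

  it("rejects a wrong access code", () => {
    expect(auth(requestWith("wrong")).error).toBe(true);
  });
});
```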

Refactor app/api/[provider]/[...path]/route.ts to eliminate per-provider duplication across provider API files

The file list shows over 15 provider-specific API files (openai.ts, anthropic.ts, google.ts, baidu.ts, bytedance.ts, deepseek.ts, alibaba.ts, glm.ts, etc.), all likely following the same pattern of request proxying, auth header injection, and error handling. The single catch-all route at app/api/[provider]/[...path]/route.ts should be leveraging a unified handler, but the existence of so many nearly identical provider files suggests significant code duplication. Consolidating shared logic reduces the maintenance burden when, e.g., adding streaming support or changing error formats. One possible factory shape is sketched after the checklist below.

  • [ ] Read each provider file (openai.ts, anthropic.ts, baidu.ts, bytedance.ts, deepseek.ts, alibaba.ts, glm.ts, xai.ts, moonshot.ts, siliconflow.ts, iflytek.ts, 302ai.ts) and document which sections are identical vs. provider-specific
  • [ ] Extract a shared createProviderHandler(config: ProviderConfig) factory function into app/api/common.ts or a new app/api/providerFactory.ts
  • [ ] Migrate at least 3 of the simpler providers (e.g. deepseek.ts, xai.ts, moonshot.ts) to use the factory, keeping provider-specific auth/base-URL as config
  • [ ] Ensure the existing CI test workflow still passes and manually verify proxying works for migrated providers
  • [ ] Update the route.ts dynamic handler to use the new factory pattern
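One possible shape for the proposed factory — createProviderHandler and ProviderConfig do not exist in the repo today; this is only the structure the checklist suggests.

```ts
// Hypothetical app/api/providerFactory.ts
import { NextRequest, NextResponse } from "next/server";

export interface ProviderConfig {
  name: string; // e.g. "deepseek"
  baseUrl: string; // upstream API root
  apiKeyEnv: string; // env var holding the key
  buildAuthHeaders?: (apiKey: string) => Record<string, string>;
}

export function createProviderHandler(config: ProviderConfig) {
  return async function handle(req: NextRequest, path: string[]) {
    const apiKey = process.env[config.apiKeyEnv];
    if (!apiKey) {
      return NextResponse.json(
        { error: `missing ${config.apiKeyEnv}` },
        { status: 401 },
      );
    }

    const headers = {
      "Content-Type": "application/json",
      ...(config.buildAuthHeaders?.(apiKey) ?? {
        Authorization: `Bearer ${apiKey}`,
      }),
    };

    // Reconstruct the upstream URL from the catch-all path segments.
    const upstream = `${config.baseUrl}/${path.join("/")}`;
    const body = req.method === "GET" ? undefined : await req.text();

    return fetch(upstream, { method: req.method, headers, body });
  };
}

// Simpler providers would then reduce to configuration, e.g.:
export const deepseekHandler = createProviderHandler({
  name: "deepseek",
  baseUrl: "https://api.deepseek.com",
  apiKeyEnv: "DEEPSEEK_API_KEY",
});
```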

Add unit tests for app/client/platforms/* API client implementations

The app/client/platforms/ directory contains client-side API wrappers for every provider (anthropic.ts, alibaba.ts, baidu.ts, bytedance.ts, deepseek.ts, ai302.ts, etc.) which handle request construction, streaming response parsing, and error mapping. These are complex, stateful, and directly affect the user-facing chat experience. Currently there are no visible tests for these files despite Jest being configured. A regression in any platform client (e.g. broken streaming parser) would silently break chat for all users of that provider.

  • [ ] Identify the common interface/base class in app/client/api.ts that all platform clients implement
  • [ ] Create tests/client/platforms/anthropic.test.ts: mock fetch, test request payload construction, test SSE stream parsing, and test error response handling
  • [ ] Create tests/client/platforms/deepseek.test.ts with equivalent coverage for DeepSeek-specific request formatting
  • [ ] Create tests/client/platforms/alibaba.test.ts covering Alibaba-specific request formatting and stream parsing

Good first issues

  1. Add unit tests for app/api/auth.ts — the test suite exists (jest config, test:ci script) but auth logic is critical and likely undertested given it gates all API access.
  2. Add a new provider integration for Mistral AI following the pattern in app/api/siliconflow.ts — the catch-all route already supports arbitrary providers; it just needs a new handler file.
  3. Document the mask/system-prompt build pipeline in a CONTRIBUTING.md — the app/masks/build.ts pre-build step is a non-obvious gotcha not explained beyond the package.json scripts.

Recent commits

  • c3b8c15 — Merge pull request #6637 from princeaden1/feat-xai-new-models (Leizhenpeng)
  • 12ec11e — feat: new models for xAI (#6559) (princeaden1)
  • 995bef7 — Merge pull request #6599 from DreamRivulet/add-support-GPT5 (Leizhenpeng)
  • 38ac502 — Add support for GPT5 (DreamRivulet)
  • 0511808 — use max_completion_tokens (DreamRivulet)
  • 42eff64 — use max_completion_tokens (DreamRivulet)
  • 8ae6883 — add gpt-5 (DreamRivulet)
  • c0f2ab6 — add gpt-5 (DreamRivulet)
  • 557a2cc — chore: update version (Leizhenpeng)
  • a2a2664 — Merge pull request #6572 from jerryno6/main (Leizhenpeng)

Security observations

  • Critical · API Keys Exposed as Empty Environment Variables in Dockerfile — Dockerfile, docker-compose.yml. The Dockerfile explicitly sets OPENAI_API_KEY, GOOGLE_API_KEY, and CODE as empty environment variables in both the builder and runner stages. While they are empty by default, this pattern encourages passing secrets via environment variables which can be exposed in Docker inspect output, logs, and container metadata. Additionally, the docker-compose.yml passes these directly from host environment without validation. Fix: Use Docker secrets or a secrets management solution (e.g., HashiCorp Vault, AWS Secrets Manager). Never define sensitive environment variable names in Dockerfile. Use .env files excluded from version control for docker-compose, and document this clearly.
  • Critical · Potential Server-Side Request Forgery (SSRF) via Proxy and API Route Handlers — app/api/proxy.ts, app/api/[provider]/[...path]/route.ts, app/api/common.ts. The application contains multiple API proxy endpoints (app/api/proxy.ts, app/api/[provider]/[...path]/route.ts, app/api/webdav/[...path]/route.ts, app/api/upstash/[action]/[...key]/route.ts) that forward requests to external services. If user-controlled input is used to construct target URLs without strict validation and allowlisting, attackers can abuse these endpoints to make the server send requests to internal network resources, cloud metadata endpoints (e.g., 169.254.169.254), or other unintended destinations. Fix: Implement a strict allowlist of permitted target hosts/URLs. Validate and sanitize all user-supplied URL parameters before use. Block requests to private IP ranges, loopback addresses, and cloud metadata endpoints. Use a dedicated HTTP client with restricted capabilities.
  • High · XSS Risk via Artifacts Rendering Endpoint — app/components/artifacts.tsx, app/api/artifacts/route.ts. The presence of app/api/artifacts/route.ts and app/components/artifacts.tsx suggests the application renders user-provided or AI-generated HTML/code artifacts. If user-generated content is rendered without proper sanitization, this could lead to Cross-Site Scripting (XSS) attacks. React's dangerouslySetInnerHTML or direct HTML injection in artifact rendering is a common vulnerability in AI chat applications. Fix: Sanitize all AI-generated or user-provided HTML using a library like DOMPurify before rendering. Render artifacts in a sandboxed iframe with appropriate sandbox attributes (e.g., sandbox='allow-scripts' without allow-same-origin). Implement a strict Content Security Policy. (A minimal sanitization sketch follows this list.)
  • High · Insufficient Access Control on API Routes (Weak or Optional Authentication) — app/api/auth.ts, app/api/openai.ts, app/api/anthropic.ts, app/api/google.ts, app/api/common.ts. The app/api/auth.ts file handles authentication, but multiple API route handlers exist for various providers (openai.ts, anthropic.ts, google.ts, baidu.ts, etc.). If authentication checks are not consistently enforced across all routes, attackers could bypass auth and directly invoke AI provider APIs, leading to API key abuse, quota exhaustion, and unauthorized data access. The CODE environment variable appears to be the sole access control mechanism, which is a weak shared-secret approach. Fix: Ensure authentication middleware is applied consistently to all API routes. Replace the single shared CODE secret with proper user authentication (e.g., OAuth, JWT). Implement rate limiting per user/IP. Add audit logging for all API calls.
  • High · Insecure MCP Configuration File with World-Writable Permissions — Dockerfile (RUN mkdir -p /app/app/mcp && chmod 777 /app/app/mcp). The Dockerfile executes 'chmod 777 /app/app/mcp' which grants world-readable, world-writable, and world-executable permissions to the MCP configuration directory. This allows any process running in the container (including potentially compromised application code) to modify MCP configuration files, potentially enabling privilege escalation or malicious configuration injection. Fix: Use the principle of least privilege. Set directory permissions to 750 or 755 at most, owned by the application user. Create a dedicated non-root user in the Dockerfile and run the application as that user. Use 'chmod 640' for configuration files.
  • High · Application Runs as Root in Docker Container — Dockerfile. The container runs the application process as root, which amplifies the impact of any code-execution or SSRF issue above. Fix: Create a dedicated non-root user in the Dockerfile and run the application as that user (as also recommended in the MCP permissions finding).
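For the artifacts XSS finding above, a minimal hardening sketch (not the repo's current code): sanitize AI-generated HTML with DOMPurify, then render it in an origin-isolated iframe, following the suggested fix. Component and prop names are hypothetical.

```tsx
// Illustrative only — not taken from app/components/artifacts.tsx.
import DOMPurify from "dompurify";

export function SafeArtifact({ html }: { html: string }) {
  // Strip known-dangerous markup (script tags, inline event handlers) first.
  const clean = DOMPurify.sanitize(html);

  return (
    <iframe
      // Even if something slips past sanitization, allow-scripts without
      // allow-same-origin keeps the frame isolated from the app's origin.
      sandbox="allow-scripts"
      srcDoc={clean}
      style={{ width: "100%", border: "none" }}
      title="artifact preview"
    />
  );
}
```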

LLM-derived; treat as a starting point, not a security audit.

Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
