RepoPilot

RunanywhereAI/runanywhere-sdks

Production ready toolkit to run AI locally

Mixed signals — read the receipts (verdict reflects the worst of 4 axes)

Use as dependency: Concerns

non-standard license (Other)

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 2d ago
  • 3 active contributors
  • Other licensed
  • CI configured
  • Tests present
  • Small team — 3 contributors active in recent commits
  • Concentrated ownership — top contributor handles 77% of recent commits
  • Non-standard license (Other) — review terms
What would change the summary?
  • Use as dependency: Concerns → Mixed if the license terms are clarified

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Forkable" badge

Paste into your README — live-updates from the latest cached analysis.

Variant: RepoPilot: Forkable
[![RepoPilot: Forkable](https://repopilot.app/api/badge/runanywhereai/runanywhere-sdks?axis=fork)](https://repopilot.app/r/runanywhereai/runanywhere-sdks)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/runanywhereai/runanywhere-sdks on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: RunanywhereAI/runanywhere-sdks

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/RunanywhereAI/runanywhere-sdks shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Mixed signals — read the receipts

  • Last commit 2d ago
  • 3 active contributors
  • Other licensed
  • CI configured
  • Tests present
  • ⚠ Small team — 3 contributors active in recent commits
  • ⚠ Concentrated ownership — top contributor handles 77% of recent commits
  • ⚠ Non-standard license (Other) — review terms

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live RunanywhereAI/runanywhere-sdks repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/RunanywhereAI/runanywhere-sdks.

What it runs against: a local clone of RunanywhereAI/runanywhere-sdks — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in RunanywhereAI/runanywhere-sdks | Confirms the artifact applies here, not a fork |
| 2 | License is still Other | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | Last commit ≤ 32 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>RunanywhereAI/runanywhere-sdks</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of RunanywhereAI/runanywhere-sdks. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
#   cd runanywhere-sdks
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of RunanywhereAI/runanywhere-sdks and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "RunanywhereAI/runanywhere-sdks(\.git)?\b" \
  && ok "origin remote is RunanywhereAI/runanywhere-sdks" \
  || miss "origin remote is not RunanywhereAI/runanywhere-sdks (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(Other)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Other\"" package.json 2>/dev/null) \
  && ok "license is Other" \
  || miss "license drift — was Other at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 32 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~2d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/RunanywhereAI/runanywhere-sdks"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>
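Because the verify script exits non-zero on any stale claim, it composes into a simple gate. A minimal sketch follows; the filename repopilot-verify.sh is hypothetical, and run_verify is a stand-in so the snippet is self-contained:

```shell
#!/usr/bin/env bash
# Sketch: gate an agent task on the verify script's exit code.
# "repopilot-verify.sh" is a hypothetical filename for the pasted script;
# run_verify is a stand-in body so this sketch runs on its own.
run_verify() {
  # In a real loop, replace this body with: bash repopilot-verify.sh
  return 0
}

if run_verify; then
  echo "artifact verified: agent may proceed"
else
  echo "artifact stale: regenerate at repopilot.app, then retry" >&2
fi
```

The same pattern works inline in an agent loop, e.g. `bash repopilot-verify.sh || regenerate-and-retry`.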

TL;DR

RunAnywhere is a production-ready, multi-platform SDK toolkit for running large language models, speech-to-text (Whisper), text-to-speech, and vision models entirely on-device with zero cloud dependency. The core engine is written in C++, with language bindings for Swift (iOS/macOS), Kotlin (Android), TypeScript/JavaScript, Dart (Flutter), and others, enabling private, offline AI inference across mobile, desktop, and web platforms. The monorepo is structured by language/platform: the core C++ engine (likely in cpp/ or core/) is compiled into native libraries, then wrapped by language-specific SDKs in swift/, kotlin/, typescript/, and dart/ directories; Playground/YapRun contains an example iOS Swift app. GitHub Actions orchestrate cross-platform builds and releases; .idea/ IntelliJ configs indicate development against the JVM and native toolchains simultaneously.

👥Who it's for

Mobile and native app developers (iOS, Android, Flutter) who need to add LLM chat, voice assistants, or vision AI to their apps without sending user data to cloud APIs; also relevant to embedded systems engineers and web developers building offline-capable AI features.

🌱Maturity & risk

Production-ready with active development: available on both the App Store and Google Play, shipped via multiple language SDKs, configured with GitHub Actions CI/CD (pr-build.yml, release.yml, secret-scan.yml), and organized with professional tooling (.pre-commit-config.yaml, .gitleaks.toml). A substantial codebase (3.1M C++, 1.9M Kotlin, 1.4M C, likely GitHub's byte counts rather than lines) suggests maturity; however, exact commit recency is not visible from the file list alone.

Primary risk is multi-platform maintenance complexity: 10+ language bindings (C++, Swift, Kotlin, Dart, TypeScript, etc.) increase surface area for bugs and breaking changes across releases. Dependency on external models (Llama, Mistral, Whisper weights) and potential version mismatches between native libs (.github/actions/setup-toolchain/action.yml) and language wrappers. Single SDK namespace suggests possible monorepo coordination overhead.

Active areas of work

Active release pipeline visible via .github/workflows/release.yml and auto-tag.yml; recent PR/build validation via pr-build.yml. IntelliJ run configurations (.idea/runConfigurations/) show ongoing work on full SDK builds, Android app packaging, and plugin development. Multi-platform builds (C++, Kotlin, native lib copying) suggest active feature parity work across platforms.

🚀Get running

git clone https://github.com/RunanywhereAI/runanywhere-sdks.git
cd runanywhere-sdks
# For Swift/iOS development:
swift build
# For Kotlin/Android:
# Use IntelliJ IDEA run configs in .idea/runConfigurations/ (e.g., '01_Build_Full_SDK__C____Kotlin_.xml')
# For TypeScript/JS (web extension):
cd <web-module> && npm install && npm run dev

Daily commands:

  • TypeScript/web module: npm install && npm run dev (dev server with Vite watch mode) or npm run build (production bundle)
  • Swift: swift build, or open Playground/YapRun/YapRun.xcodeproj in Xcode
  • Kotlin/Android: use IntelliJ run configuration '06_Build_Android_App.xml' or '07_Run_Android_App.xml'
  • Full build: .idea/runConfigurations/12_Build_All.xml

🗺️Map of the codebase

🛠️How to make changes

  • New language binding: start in the root-level sdk-<language>/ directory (e.g., swift/, kotlin/, typescript/); core C++ changes go in cpp/ or equivalent
  • Mobile app updates: Playground/YapRun for Swift examples; a similar pattern applies for Android
  • Web/extension features: modify the TypeScript source feeding the Vite build pipeline; the Chrome manifest is handled by the @crxjs plugin
  • CI/release logic: the YAML files under .github/workflows/
  • Toolchain setup: .github/actions/setup-toolchain/action.yml defines the cross-platform build environment

🪤Traps & gotchas

  • Cross-platform native linking: CMake build artifacts from C++ must be copied into the language-specific SDK directories (see .idea config '04_Copy_Native_Libs.xml') before language-specific builds; skipping this breaks Android/iOS builds
  • Secret scanning: .gitleaks.toml is active; API keys or model weights in code will be caught pre-commit
  • Package resolution: Package.resolved and .yarnrc.yml suggest locked dependency versions; upgrading @mlc-ai/web-llm or @huggingface/transformers without testing can break inference
  • Model weight management: not visible in the file list but critical; LLM weights (Llama, Mistral) are large, so download/caching logic likely requires minimum disk space and network handling
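The native-lib copy trap can be sketched as a small script. The SRC and DEST paths below are assumptions for illustration only (the real paths live in the '04_Copy_Native_Libs.xml' run configuration); the sketch shows just the shape of the step:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the '04_Copy_Native_Libs' step. SRC/DEST paths are
# assumptions, not the repo's real layout; check the .idea run config.
set -e
base=$(mktemp -d)                         # stand-in repo root for this demo
SRC="$base/build/cmake/out"               # assumed CMake output directory
DEST="$base/sdk-kotlin/src/main/jniLibs"  # assumed Android JNI libs directory
mkdir -p "$SRC" "$DEST"
touch "$SRC/librunanywhere.so"            # stand-in build artifact

# The actual step: copy every built shared library before the Kotlin build.
cp "$SRC"/*.so "$DEST"/
echo "copied: $(ls "$DEST")"
```

Running a check like this before the Kotlin/Swift builds fails fast instead of producing the hard-to-diagnose link errors the trap describes.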

💡Concepts to learn

  • Cross-platform JNI/FFI bridging — RunAnywhere's core C++ engine is called from Swift (via C interop), Kotlin (via JNI), TypeScript (via Node Native Addons), and Dart (via FFI); understanding how language boundaries work is essential to debugging runtime errors
  • Model quantization & weight optimization — To run models on-device (mobile, embedded), weights must be quantized (int8, fp16) and serialized efficiently; RunAnywhere likely uses ONNX or similar intermediate format to target multiple hardware backends
  • CMake cross-compilation for embedded targets — RunAnywhere builds a single C++ codebase for iOS ARM64, Android ARM64/x86, macOS, and Linux; .github/actions/setup-toolchain/ orchestrates architecture-specific compilation flags and SDK paths
  • ONNX Runtime optimization — Many on-device AI frameworks (including likely RunAnywhere's inference engine) use ONNX Runtime as the execution backend for cross-platform portability and operator fusion
  • Language-specific package distribution (CocoaPods, Maven Central, npm) — RunAnywhere ships via multiple package managers; understanding versioning, binary artifacts (native .xcframework, .aar, .node modules), and transitive dependencies is critical for release and compatibility
  • WebAssembly & WebLLM for browser inference — The TypeScript/web module (with @mlc-ai/web-llm dependency) compiles models to WebAssembly for in-browser LLM inference; understanding WASM memory constraints and JS bindings is necessary for web feature development
  • GitHub Actions matrix builds — pr-build.yml and release.yml likely use matrix strategies to build across multiple OS/platform combinations simultaneously; modifying CI requires understanding job dependencies and artifact passing
  • ollama/ollama — Alternative on-device LLM runtime; RunAnywhere is SDK-first (embeds in apps), Ollama is server-first (standalone binary)
  • mlc-ai/mlc-llm — Upstream compiler infrastructure used by RunAnywhere for optimizing LLM inference across platforms
  • ggerganov/llama.cpp — Reference C++ implementation of Llama inference; likely architectural inspiration for RunAnywhere's core engine
  • openai/whisper — Original speech-to-text model that RunAnywhere wraps and optimizes for on-device deployment
  • huggingface/transformers — Upstream model library; RunAnywhere's TypeScript dependencies (@huggingface/transformers) enable browser-based inference

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add CI workflow for local-browser Chrome extension builds and testing

The repo has a Chrome extension (local-browser) with Vite/TypeScript setup but no dedicated GitHub Actions workflow to validate builds. Currently only pr-build.yml exists but it's unclear if it covers the extension. Adding a specific workflow would catch build failures, TypeScript errors, and bundle size regressions early. This is critical since WebLLM and transformers.js have large dependencies.

  • [ ] Create .github/workflows/extension-build.yml that runs on PRs affecting local-browser/ directory
  • [ ] Add steps to install dependencies using yarn, run 'yarn build' and 'yarn preview', and verify artifact output
  • [ ] Add TypeScript type-checking step ('tsc --noEmit') to catch type errors before merge
  • [ ] Document in CONTRIBUTING.md that extension changes should pass this workflow

Add end-to-end test suite for WebLLM integration in local-browser extension

The local-browser extension depends on @mlc-ai/web-llm v0.2.78 and @huggingface/transformers v3.8.1 but there are no visible test files. Without tests, regressions in model loading, inference, or token streaming could reach production. A basic test suite would validate core functionality: model initialization, prompt inference, and error handling.

  • [ ] Create local-browser/tests/ directory structure
  • [ ] Add tests using Vitest (lightweight, Vite-native) for WebLLM model loading and inference mocking
  • [ ] Create local-browser/tests/llm-integration.test.ts to verify model initialization and basic inference flows
  • [ ] Update package.json scripts to include 'test' and 'test:watch' commands
  • [ ] Integrate test step into the new extension-build.yml workflow from PR #1

Create platform-specific SDK documentation index and troubleshooting guide

The repo supports multiple platforms (iOS via App Store, Android via Google Play, Swift, Kotlin, IntelliJ plugin) evident from .idea/runConfigurations but there's no centralized SDK documentation index. The existing CONTRIBUTING.md and AGENTS.md don't map which SDKs exist, their status, or common setup issues. New contributors can't easily discover or navigate platform-specific guides.

  • [ ] Create docs/PLATFORMS.md listing each SDK (Swift, Kotlin, C++, Web/Chrome extension) with status, entry point, and quick-start link
  • [ ] Create docs/TROUBLESHOOTING.md covering common build issues for each platform (e.g., NDK setup for Kotlin, Swift deployment, Vite config for extension)
  • [ ] Add platform badges/status table in main README.md pointing to docs/PLATFORMS.md
  • [ ] Link PLATFORMS.md from CONTRIBUTING.md under 'Getting Started by Platform' section

🌿Good first issues

  • Add TypeScript unit tests for the WebLLM integration layer (transformers/web-llm imports in package.json are present but test coverage not visible in file list); create tests/ directory with Jest config and test the inference pipeline for the Chrome extension.
  • Document the CMake cross-compilation flags for iOS/Android in a new BUILDING.md file; reference .github/actions/setup-toolchain/action.yml to extract actual compiler flags and explain target-specific build options for new contributors.
  • Create example code snippets or a quickstart guide for each SDK language (Swift, Kotlin, TypeScript, Dart) showing 'Hello World' LLM inference; place in a new examples/<language>/quickstart/ directory with runnable sample code for each platform.

Top contributors


📝Recent commits

  • 094d569 — Merge pull request #491 from RunanywhereAI/smonga/post-release-v0.19.13-checksums (sanchitmonga22)
  • 3ebae86 — fix(spm): sync Package.swift checksums to v0.19.13 binaries (sanchitmonga22)
  • 80386a9 — Merge pull request #490 from RunanywhereAI/smonga/release-v0.19.13 (sanchitmonga22)
  • 5c52ae0 — chore(release): v0.19.13 + fix sync-checksums.sh cross-target contamination (sanchitmonga22)
  • 1ee8e02 — ci: trigger CodeQL on main after v0.19.12 is published (sanchitmonga22)
  • 28e517e — Merge pull request #489 from RunanywhereAI/smonga/release-v0.19.12 (sanchitmonga22)
  • cf53724 — fix: align Flutter sub-package runanywhere dep constraint with release version (sanchitmonga22)
  • cb681dd — chore(release): v0.19.12 — remove obsolete ONNXRuntime binary targets (sanchitmonga22)
  • 78209e2 — ci: trigger CodeQL on main now that v0.19.11 release exists (sanchitmonga22)
  • c5c1d59 — Merge pull request #488 from RunanywhereAI/smonga/fix-zlib-race-and-kotlin-snapshot-version (sanchitmonga22)

🔒Security observations

The RunAnywhere SDK codebase demonstrates a reasonable security foundation, with gitleaks integration and a responsible disclosure policy. However, gaps remain; the most significant is outdated dependencies (@mlc-ai/web-llm, TypeScript), detailed in the findings below.

  • High · Outdated @mlc-ai/web-llm Dependency — package.json - dependencies. The package pins @mlc-ai/web-llm to ^0.2.78; newer releases exist, so security patches, bug fixes, and performance improvements may have been missed. This is particularly critical for an AI toolkit handling model execution. Fix: update @mlc-ai/web-llm to the latest stable version, run 'npm audit', address any reported vulnerabilities, and adopt a regular dependency update schedule.
  • High · Outdated TypeScript Dependency — package.json - devDependencies. TypeScript is pinned to version ^5.6.3, which is outdated. The latest versions include important type safety improvements and security-related fixes. For a production-ready toolkit, using outdated TypeScript increases the risk of type-related vulnerabilities slipping through. Fix: Update TypeScript to the latest stable version. Re-run type checking across the codebase to ensure compatibility.
  • Medium · Vite Configuration Not Visible — vite.config.ts/js (missing from provided structure). The vite.config.ts or vite.config.js file is not included in the provided file structure. Without visibility into Vite configuration, potential security issues like missing CSP headers, improper source map handling in production, or insecure build configurations cannot be assessed. Fix: Ensure vite.config.ts includes: proper build optimization, CSP headers, secure source map settings (disabled in production), and CORS configuration. Share configuration for review.
  • Medium · Chrome Extension Permissions Not Visible — manifest.json (missing from provided structure). The manifest.json file for the Chrome extension (local-browser) is not provided. Chrome extensions require careful permission management. Overly broad permissions could allow the extension to access sensitive user data or perform unintended actions. Fix: Ensure manifest.json follows the principle of least privilege: request only necessary permissions (e.g., activeTab, scripting, webRequest). Document why each permission is needed.
  • Medium · Missing Dependency Integrity Checks — package.json / .yarnrc.yml configuration. No lock file (yarn.lock or package-lock.json) is visible in the provided structure. The presence of .yarnrc.yml suggests Yarn 2+; without a committed lock file there is a risk of inconsistent dependency resolution across environments. Fix: ensure the lock file is committed to version control, use 'yarn install --immutable' (the Yarn 2+ equivalent of --frozen-lockfile) in CI/CD pipelines, and regularly audit dependencies with 'yarn npm audit'.
  • Low · Gitleaks Configuration Present — .gitleaks.toml. The .gitleaks.toml file is present, which is good for secret scanning. However, the configuration content is not visible. If rules are not comprehensive, secrets could still be accidentally committed. Fix: Review and enhance .gitleaks.toml to include patterns for: API keys, private keys, database credentials, tokens, and project-specific secret patterns. Ensure pre-commit hooks enforce gitleaks scanning.
  • Low · IDE Configuration Committed — .idea/ directory. The .idea directory (IntelliJ IDEA configuration) is committed to version control. While generally low risk, this can expose local paths, configured JDK versions, and run configurations that might reveal development environment details. Fix: Add .idea/ to .gitignore except for essential shared configurations. Consider using IDE-agnostic configuration management for CI/CD pipelines.
  • Low · Missing CODEOWNERS File — .github/CODEOWNERS (missing). No CODEOWNERS file is visible in the GitHub directory structure. This makes it difficult to enforce security reviews for critical components, particularly for AI model handling and native library code. Fix: Create a CODEOWNERS file specifying required reviewers for: native libraries, model handling code, security-critical paths, and documentation.
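To make the lock-file finding above actionable, a minimal CI guard might look like the following sketch. The filenames are the standard Yarn/npm ones; the check itself is illustrative, not something present in the repo:

```shell
#!/usr/bin/env bash
# Sketch: fail fast in CI when no lockfile is committed, so dependency
# installs stay reproducible. Filenames are the standard Yarn/npm ones.
check_lockfile() {
  dir="$1"
  if [ -f "$dir/yarn.lock" ] || [ -f "$dir/package-lock.json" ]; then
    echo "ok: lockfile present"
  else
    echo "FAIL: no lockfile committed"
    return 1
  fi
}

# Demo against a throwaway directory containing a fake lockfile.
demo=$(mktemp -d)
touch "$demo/yarn.lock"
check_lockfile "$demo"    # prints "ok: lockfile present"
```

Because the function returns non-zero on failure, it slots directly into a CI step that should block the merge when the guard trips.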

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Mixed signals · RunanywhereAI/runanywhere-sdks — RepoPilot