RepoPilot

Zackriya-Solutions/meetily

Privacy-first AI meeting assistant with 4x-faster Parakeet/Whisper live transcription, speaker diarization, and Ollama summarization, built on Rust. 100% local processing; no cloud required. Meetily (https://meetily.ai) is the #1 self-hosted, open-source AI meeting note taker for macOS & Windows.

Healthy

Healthy across the board.

  • Use as dependency: Healthy. Permissive license, no critical CVEs, actively maintained — safe to depend on.
  • Fork & modify: Healthy. Has a license, tests, and CI — a clean foundation to fork and modify.
  • Learn from: Healthy. Documented and popular — a useful reference codebase to read through.
  • Deploy as-is: Healthy. No critical CVEs, sane security posture — runnable as-is.

  • Last commit today
  • 7 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Concentrated ownership — top contributor handles 67% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — it updates live from the latest cached analysis.

Variant: RepoPilot: Healthy

```markdown
[![RepoPilot: Healthy](https://repopilot.app/api/badge/zackriya-solutions/meetily)](https://repopilot.app/r/zackriya-solutions/meetily)
```

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/zackriya-solutions/meetily on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: Zackriya-Solutions/meetily

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/Zackriya-Solutions/meetily shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit today
  • 7 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Concentrated ownership — top contributor handles 67% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live Zackriya-Solutions/meetily repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/Zackriya-Solutions/meetily.

What it runs against: a local clone of Zackriya-Solutions/meetily — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in Zackriya-Solutions/meetily | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>Zackriya-Solutions/meetily</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of Zackriya-Solutions/meetily. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/Zackriya-Solutions/meetily.git
#   cd meetily
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of Zackriya-Solutions/meetily and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "Zackriya-Solutions/meetily(\.git)?\b" \
  && ok "origin remote is Zackriya-Solutions/meetily" \
  || miss "origin remote is not Zackriya-Solutions/meetily (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in \
  "frontend/src-tauri/src/main.rs" \
  "backend/app/main.py" \
  "backend/whisper-custom/server/server.cpp" \
  "Cargo.toml" \
  "backend/requirements.txt"
do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/Zackriya-Solutions/meetily"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Meetily is a privacy-first, self-hosted AI meeting assistant built in Rust that captures, transcribes, and summarizes meetings entirely on local infrastructure. It combines Parakeet/Whisper live transcription (4x faster than standard Whisper), speaker diarization, and Ollama-powered summarization—all running locally with zero cloud dependency, ensuring complete data sovereignty for enterprises. Monorepo with frontend (frontend/src-tauri/ — Tauri desktop app for macOS/Windows), Rust workspace (llama-helper), and Python backend (backend/ with the FastAPI app in backend/app/main.py handling transcription via backend/app/transcript_processor.py). CI/CD orchestrated through .github/workflows/ with platform-specific Docker builds (backend/Dockerfile.*). The desktop UI drives backend API calls for local inference.

👥Who it's for

Enterprise IT teams, compliance officers, and privacy-conscious organizations who need meeting intelligence (transcription + summaries + speaker attribution) but cannot send audio to cloud services due to regulatory requirements (GDPR, HIPAA) or IP sensitivity. Also appeals to open-source contributors interested in real-time audio processing and local LLM integration.

🌱Maturity & risk

Actively developed with pre-release status (badges show GitHub releases exist). Multi-platform CI/CD pipelines for macOS/Windows/Linux (.github/workflows/) indicate professional infrastructure. TypeScript + Rust + Python backend suggests a maturing cross-platform app, though pre-release tag indicates APIs may still evolve. Production-ready for privacy-conscious early adopters; not yet 1.0.

Rust/TypeScript/C++/Python polyglot codebase increases onboarding friction and toolchain complexity. Heavy ML/audio dependencies (Whisper, Parakeet, Ollama) create vendor lock-in and require specific GPU/CPU configurations. Single organization (Zackriya-Solutions) stewardship and pre-release status mean breaking changes are possible. Windows/macOS focus limits Linux support despite workflows existing.

Active areas of work

Active development visible in build workflows for multiple platforms (.github/workflows/build-*.yml). Workspaces suggest ongoing Rust refactoring (workspace resolver 2.0). Python backend includes transcript_processor.py and schema_validator.py, indicating iteration on ML pipeline. Recent template additions (.github/ISSUE_TEMPLATE/) and CONTRIBUTING.md suggest growing contributor guidance.

🚀Get running

Clone: git clone https://github.com/Zackriya-Solutions/meetily.git && cd meetily. For backend: cd backend && pip install -r requirements.txt (Python 3.x required). For frontend: cd frontend/src-tauri && npm install (requires Node.js). See backend/README.md and .github/workflows/ for platform-specific builds (macOS requires Xcode, Windows requires MSVC).

Daily commands: Backend: cd backend && python app/main.py (starts the FastAPI server on localhost; requires Ollama running separately). Frontend: cd frontend/src-tauri && npm run tauri dev (launches a Tauri dev window with hot reload). Full stack: use Docker Compose: cd backend && docker-compose up (see backend/docker-compose.yml). Whisper/Parakeet models auto-download on first run (backend/download-ggml-model.sh script provided).

🗺️Map of the codebase

  • frontend/src-tauri/src/main.rs — Entry point for the Tauri desktop application; all contributors must understand the window initialization, IPC command routing, and lifecycle management.
  • backend/app/main.py — Core Python FastAPI server handling transcription, summarization workflows, and database operations; critical for backend feature changes.
  • backend/whisper-custom/server/server.cpp — Custom Whisper HTTP server implementation providing 4x faster transcription; foundational for audio processing pipeline.
  • Cargo.toml — Workspace root configuration managing Rust workspace members (Tauri frontend, llama-helper); must be updated for new Rust dependencies.
  • backend/requirements.txt — Python backend dependencies including FastAPI, database drivers, and ML model loaders; critical for environment setup.
  • backend/docker-compose.yml — Orchestration of backend services (Whisper server, Python API, database); essential for understanding local development deployment.
  • docs/architecture.md — High-level system architecture documenting data flow between Tauri frontend, Python backend, and ML services.

🛠️How to make changes

Add a New Meeting Endpoint

  1. Define request/response schema in backend/app/schema_validator.py with Pydantic models (backend/app/schema_validator.py)
  2. Add FastAPI route handler in backend/app/main.py with @app.post or @app.get decorator (backend/app/main.py)
  3. Implement database query in backend/app/db.py if persisting meeting data (backend/app/db.py)
  4. Expose IPC command from Tauri frontend via invoke!() macro in frontend/src-tauri/src/main.rs (frontend/src-tauri/src/main.rs)

Enable GPU Acceleration

  1. Review GPU setup prerequisites and CUDA/Metal requirements in docs/GPU_ACCELERATION.md (docs/GPU_ACCELERATION.md)
  2. Build using GPU Docker container instead of CPU variant using backend/Dockerfile.server-gpu (backend/Dockerfile.server-gpu)
  3. Update environment variables in backend/.env or docker-compose.yml to enable GPU model loading (backend/docker-compose.yml)
  4. Run backend/build-docker.sh with GPU flag or use native build scripts for macOS Metal acceleration (backend/build-docker.sh)

Customize Summarization Prompt

  1. Modify Ollama model selection and prompt template in backend/app/transcript_processor.py (backend/app/transcript_processor.py)
  2. Test workflow changes using backend/examples/run_summary_workflow.py (backend/examples/run_summary_workflow.py)
  3. Adjust model inference parameters in llama-helper/src/main.rs if using Rust LLM wrapper (llama-helper/src/main.rs)
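The steps above can be sketched as a small, hedged example: build a custom prompt and send it to a locally running Ollama instance over its standard /api/generate endpoint. The template and default model name are assumptions for illustration; Meetily's actual prompt lives in backend/app/transcript_processor.py.

```python
# Hedged sketch: customizing a summarization prompt for a local Ollama server.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

PROMPT_TEMPLATE = (
    "You are a meeting assistant. Summarize the transcript below into "
    "action items and decisions.\n\nTranscript:\n{transcript}\n"
)

def build_prompt(transcript: str) -> str:
    """Fill the template; edit PROMPT_TEMPLATE to change summary style."""
    return PROMPT_TEMPLATE.format(transcript=transcript)

def summarize(transcript: str, model: str = "mistral") -> str:
    """POST to a locally running Ollama instance (requires `ollama serve`)."""
    body = json.dumps(
        {"model": model, "prompt": build_prompt(transcript), "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Only build_prompt needs to change to customize summaries; summarize assumes Ollama is running and the model has been pulled (`ollama pull mistral`).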

Add a New UI Panel in Desktop App

  1. Define new Tauri IPC command handler in frontend/src-tauri/src/main.rs with #[tauri::command] (frontend/src-tauri/src/main.rs)
  2. Ensure backend API endpoint exists in backend/app/main.py to support the UI request (backend/app/main.py)
  3. Build and run Tauri dev server with cargo tauri dev to test UI integration with backend (frontend/README.md)

🔧Why these technologies

  • Tauri (Rust + TypeScript) — Lightweight, secure native desktop wrapper eliminating Electron overhead; native window/IPC support for macOS/Windows; Rust memory safety for system-level operations.
  • Custom Whisper C++ Server — 4x faster transcription than Whisper.py via hand-optimized C++ inference and vectorization; local processing avoids cloud API latency and privacy leaks.
  • FastAPI (Python) — Rapid API development with async/await for I/O-bound transcription and summarization workflows; built-in OpenAPI docs; seamless integration with Python ML ecosystem.
  • Ollama (Local LLM) — Privacy-first summarization without cloud API calls; runs on consumer hardware (CPU/GPU); supports multiple model families (Llama, Mistral, etc.).
  • Docker & Docker Compose — Reproducible development environment across Linux/macOS/Windows; separate CPU/GPU container variants; simplifies dependency management for Whisper + Python + database.
  • SQLite/PostgreSQL — Local-first data persistence with optional PostgreSQL for multi-user deployments; schema-based meeting and transcript storage.

⚖️Trade-offs already made

  • 100% local processing (no cloud APIs)

    • Why: Privacy compliance and user data sovereignty; eliminates reliance on external services.
    • Consequence: Requires significant local compute (GPU recommended for real-time performance); larger disk footprint for model files (~2-4GB per model).
  • Custom Whisper C++ instead of maintained library

    • Why: Achieve 4x speedup through hand-tuned inference; avoid dependency version churn.
    • Consequence: Maintenance burden for C++ server; harder to adopt upstream Whisper improvements without manual porting.
  • Tauri + TypeScript frontend vs Electron

    • Why: Smaller binary size, better memory efficiency, security model.
    • Consequence: Smaller ecosystem and plugin selection than Electron; reliance on the system WebView can introduce platform-specific rendering differences.

🪤Traps & gotchas

  • Model downloads: First run auto-fetches Whisper/Parakeet models (gigabytes); requires internet on first launch.
  • Ollama dependency: Summarization requires running the Ollama service separately (backend/docker-compose.yml handles this in Docker).
  • GPU vs CPU: Different Dockerfiles (Dockerfile.server-cpu vs Dockerfile.server-gpu) with different dependency chains; the wrong one causes silent performance degradation.
  • Platform specifics: macOS requires code signing (Tauri); Windows needs MSVC; Linux support is unclear despite a build workflow existing.
  • Audio input permissions: macOS/Windows require microphone permission dialogs before the Tauri app can record.
  • CORS debugging: backend/debug_cors.py exists, suggesting cross-origin request issues during local dev are known.


💡Concepts to learn

  • Speaker Diarization — identifying who spoke when in a meeting recording; Meetily's speaker attribution relies on this audio-processing technique to separate and label speakers in transcripts
  • Quantization (Model Compression) — Parakeet's 4x speed improvement over Whisper likely uses quantized model weights (int8/float16) to run on-device without GPU; understanding this explains performance/accuracy tradeoffs
  • Monorepo Workspace (Cargo) — Meetily's Cargo.toml defines a workspace with multiple members (frontend/src-tauri, llama-helper); understanding workspace resolution and dependency sharing is essential for cross-crate changes
  • IPC (Inter-Process Communication) via REST — the Tauri desktop frontend communicates with the FastAPI backend over an HTTP REST API; no shared memory—all state passes as JSON, requiring careful serialization and error handling
  • Stream Processing (Audio Chunking) — Live transcription via Whisper/Parakeet processes audio in fixed chunks rather than waiting for complete files; affects latency, buffering, and context window handling in transcript_processor.py
  • GDPR Compliance (Data Sovereignty) — Meetily's entire architecture (local-only processing, no cloud uploads) is designed to satisfy GDPR's data localization requirements; this shapes deployment options and API design decisions
  • Docker Multi-Stage Builds — Dockerfile.server-cpu, Dockerfile.server-gpu, and Dockerfile.server-macos use separate stages for compilation vs runtime; understanding this is key to debugging build failures and optimizing image size
  • openai/whisper — Upstream transcription model that Meetily's Parakeet variant optimizes for speed; essential for understanding base transcription tech
  • pyannote/pyannote-audio — Standard open-source speaker diarization library likely underlying Meetily's speaker attribution feature
  • ollama/ollama — Local LLM runtime that powers Meetily's on-device summarization without cloud dependency
  • tauri-apps/tauri — Desktop application framework used for Meetily's macOS/Windows UI layer instead of Electron
  • huggingface/transformers — ML framework likely providing model loading and inference utilities for Whisper/Parakeet in the Python backend
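The stream-processing (audio chunking) concept above can be sketched as a self-contained generator. Chunk and overlap sizes are illustrative assumptions; Meetily's actual windowing lives in the Whisper server and backend pipeline.

```python
# Illustrative fixed-size audio chunking with overlap, as used conceptually
# in streaming transcription. Sizes are assumptions, not Meetily's values.
from typing import Iterator

def chunk_audio(
    samples: list[float],
    chunk_size: int = 16000 * 5,   # 5 s of 16 kHz PCM per chunk
    overlap: int = 16000,          # 1 s of context carried between chunks
) -> Iterator[list[float]]:
    """Yield overlapping windows so words straddling a boundary keep context."""
    step = chunk_size - overlap
    for start in range(0, max(len(samples) - overlap, 1), step):
        yield samples[start:start + chunk_size]
```

The overlap is the latency/accuracy knob: larger overlap gives the model more context at chunk boundaries but means re-transcribing more audio per window.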

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for Whisper transcription server (backend/whisper-custom/server/server.cpp)

The custom Whisper server implementation lacks automated tests. Given that transcription is the core feature of Meetily, adding C++ integration tests would catch regressions in the HTTP server, verify audio processing, and ensure transcription output quality across different audio formats. This is critical for a privacy-first tool where accuracy is paramount.

  • [ ] Create backend/whisper-custom/server/tests/ directory with CMake test configuration
  • [ ] Add unit tests for server.cpp HTTP endpoint handling (POST /transcribe, response formatting)
  • [ ] Add integration tests with sample audio files (various formats/sample rates) to verify transcription output
  • [ ] Create GitHub Actions workflow (.github/workflows/test-whisper-server.yml) to run C++ tests on Linux/macOS/Windows
  • [ ] Document test execution in backend/SCRIPTS_DOCUMENTATION.md

Add Python backend API contract tests (backend/app/main.py & schema_validator.py)

The backend has schema_validator.py and multiple entry points (main.py, transcript_processor.py) but no visible pytest suite. Adding contract/validation tests would ensure the FastAPI endpoints match their documentation (backend/API_DOCUMENTATION.md), prevent breaking changes during refactoring, and make contributor PRs safer. Critical for a tool with multiple backend components.

  • [ ] Create backend/tests/ directory with conftest.py for pytest fixtures (mock Ollama, test database)
  • [ ] Add tests/test_main.py for FastAPI endpoint validation against backend/API_DOCUMENTATION.md
  • [ ] Add tests/test_schema_validator.py for transcript schema validation edge cases
  • [ ] Add tests/test_transcript_processor.py for summarization/processing workflows with mock Ollama
  • [ ] Create GitHub Actions workflow (.github/workflows/test-backend.yml) to run pytest on Python 3.9+
  • [ ] Update backend/README.md with 'Running Tests' section
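The mock-Ollama idea in the checklist above can be sketched as a self-contained pytest-style test. summarize_with_ollama is a toy stand-in, not Meetily's real transcript_processor API; the point is that injecting the Ollama call lets tests run without a live model.

```python
# Hedged sketch: contract-testing a summarization path with a mocked Ollama.
from unittest import mock

def summarize_with_ollama(transcript: str, call_ollama) -> str:
    """Toy stand-in for a transcript_processor function that hits Ollama."""
    return call_ollama(f"Summarize this transcript:\n{transcript}")

def test_summarize_uses_mocked_ollama():
    # No live Ollama needed: the LLM call is injected and mocked out.
    fake_ollama = mock.Mock(return_value="- decided to ship v3.0")
    result = summarize_with_ollama("we decided to ship v3.0", fake_ollama)
    fake_ollama.assert_called_once()
    assert "ship v3.0" in result
```

The same dependency-injection shape works for mocking the database fixture mentioned in the conftest.py item.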

Create platform-specific troubleshooting documentation (docs/TROUBLESHOOTING_*.md)

The repo has multiple build workflows for macOS, Windows, and Linux, plus Docker support, but no visible troubleshooting guide. Given the complexity (Rust frontend, Python backend, Whisper C++ server, Ollama integration), new contributors and users hit platform-specific issues. Adding platform-specific docs would reduce support burden and improve onboarding. The BLUETOOTH_PLAYBACK_NOTICE.md suggests known issues exist.

  • [ ] Create docs/TROUBLESHOOTING_MACOS.md covering M1/M2 architecture issues, Rust compilation, audio permission/Bluetooth playback (reference BLUETOOTH_PLAYBACK_NOTICE.md)
  • [ ] Create docs/TROUBLESHOOTING_WINDOWS.md covering Visual Studio build tools, CUDA setup for GPU transcription, PowerShell script execution policies
  • [ ] Create docs/TROUBLESHOOTING_LINUX.md covering ALSA/PulseAudio audio setup, GGML model download issues, Docker volume mounts
  • [ ] Add troubleshooting section to docs/DOCKER_SETUP.md for container-specific issues (GPU passthrough, volume permissions)
  • [ ] Link all troubleshooting docs from main README.md in a 'Common Issues' section

🌿Good first issues

  • Add pytest integration tests for backend/app/transcript_processor.py covering diarization and summarization with mock Whisper/Ollama responses—currently no test files visible for core pipeline logic: Medium
  • Document exact Ollama model requirements and the download process in backend/README.md with a code example (e.g., 'ollama pull mistral' before starting the backend)—users may fail on first run without this: Low
  • Add a Windows batch wrapper, backend/clean_start_backend.cmd (analogous to backend/clean_start_backend.sh), to handle Python venv setup and dependency caching, reducing setup friction: Low


📝Recent commits

  • 91b0c09 — Merge pull request #381 from Zackriya-Solutions/devtest (sujithatzackriya)
  • edf71c0 — Merge pull request #378 from Zackriya-Solutions/fix/model-unload-fixes (sujithatzackriya)
  • afe0e72 — chore(readme): add import & enhance demo GIF (sujithatzackriya)
  • 7d9b91b — chore(readme): fix broken screenshot and add contributor attribution (sujithatzackriya)
  • 0fbeb6e — fix: replace unstable floor_char_boundary with stable equivalent (safvanatzack)
  • 681a8bb — chore(readme): import and enhance feature (safvanatzack)
  • 544acaa — fix: unload transcription engine after batch jobs to free memory (safvanatzack)
  • 2e33690 — Merge pull request #372 from Zackriya-Solutions/devtest (sujithatzackriya)
  • 3009f71 — Merge pull request #375 from Zackriya-Solutions/fix/v3.0.0-pre-merge-fixes (sujithatzackriya)
  • c984717 — fix: resolve remaining PR #372 review bugs (sujithatzackriya)

🔒Security observations

  • High · Hardcoded Environment Configuration File — backend/temp.env. The file 'backend/temp.env' appears to be a temporary environment configuration file that may contain sensitive credentials. Temporary environment files should never be committed to version control as they often contain API keys, database passwords, or other secrets. Fix: Remove 'backend/temp.env' from the repository immediately. Add '*.env' and 'temp.env' to .gitignore. Use .env.example as a template for required variables instead.
  • High · Potential SQL Injection Risk via Schema Validation — backend/app/db.py, backend/app/schema_validator.py, backend/app/transcript_processor.py. The file 'backend/app/schema_validator.py' suggests dynamic schema validation. Without seeing the implementation, there's a high risk that user input could be used to construct SQL queries without proper parameterization, especially in 'transcript_processor.py' and 'db.py'. Fix: Use parameterized queries (prepared statements) exclusively. Never concatenate user input into SQL queries. Implement input validation and sanitization for all user-supplied data.
  • High · Docker CORS Debug Script in Production — backend/debug_cors.py. The file 'backend/debug_cors.py' suggests debugging CORS issues. Debug scripts should not be present in production deployments as they may expose endpoints or disable security headers that could lead to exploitation. Fix: Remove debug scripts from production builds. Implement proper CORS configuration in main.py with strict origin whitelisting. Use environment variables to control CORS settings.
  • Medium · Outdated Rust Version Specification — Cargo.toml. The Cargo.toml specifies 'rust-version = 1.77', which may be outdated depending on when this analysis is performed. Outdated Rust versions may contain known security vulnerabilities in the standard library and compiler. Fix: Regularly update to the latest stable Rust version. Implement automated dependency scanning via tools like 'cargo audit' in CI/CD pipelines (GitHub Actions already configured).
  • Medium · No Version Pinning on Critical Dependencies — Cargo.toml (workspace.dependencies). Dependencies like 'tokio', 'serde', and 'anyhow' use semantic versioning without explicit patch version pinning. This could allow minor/patch updates with unexpected behavioral changes or security issues. Fix: Consider using Cargo.lock for deterministic builds in production. For critical dependencies, use more restrictive version constraints (e.g., '1.32.0' instead of '1.32'). Regularly audit dependencies with 'cargo audit'.
  • Medium · Multiple Docker Images Without Security Scanning — backend/Dockerfile.app, backend/Dockerfile.server-cpu, backend/Dockerfile.server-gpu, backend/Dockerfile.server-macos. Multiple Dockerfiles exist (server-cpu, server-gpu, server-macos, app) but no evidence of security scanning or base image verification. Docker images may contain vulnerabilities in base layers or dependencies. Fix: Use minimal base images (alpine/distroless). Implement Docker image scanning in CI/CD (e.g., Trivy, Snyk). Pin base image versions explicitly. Scan images before release.
  • Medium · Potential Insecure Deserialization — backend/app/main.py, backend/app/transcript_processor.py, llama-helper (Rust). The codebase uses serde for JSON deserialization across Python and Rust components. Without proper validation, malformed or malicious JSON could lead to deserialization attacks, especially in 'transcript_processor.py'. Fix: Validate all deserialized data against strict schemas. Use serde validation attributes. Implement size limits on incoming requests. Use deny-lists for unexpected fields.
  • Medium · Windows PowerShell Scripts Without Signing — backend/build-docker.ps1, backend/clean_start_backend.cmd, backend/install_dependancies_for_windows.ps1, backend/setup-db.ps. Multiple PowerShell scripts (.ps1 files) exist but are unlikely to be signed. Unsigned scripts may be exploited for arbitrary code execution if execution policies are not properly configured. Fix: Sign scripts with a trusted code-signing certificate, or document the required execution policy (e.g., RemoteSigned) and have users review scripts before running them.
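As a minimal illustration of the parameterized-query fix recommended in the SQL injection item above, a sqlite3 sketch (table and column names are hypothetical, not Meetily's actual schema):

```python
# Sketch of parameterized queries with sqlite3; schema is illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE meetings (id INTEGER PRIMARY KEY, title TEXT)")

def insert_meeting(title: str) -> int:
    # The ? placeholder binds user input; it is never spliced into the SQL text.
    cur = conn.execute("INSERT INTO meetings (title) VALUES (?)", (title,))
    conn.commit()
    return cur.lastrowid

def find_meeting(title: str):
    # Safe even for adversarial input like "x'; DROP TABLE meetings; --"
    return conn.execute(
        "SELECT id, title FROM meetings WHERE title = ?", (title,)
    ).fetchone()
```

The same placeholder discipline applies regardless of driver; only the placeholder syntax changes (e.g., %s for psycopg).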

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Healthy signals · Zackriya-Solutions/meetily — RepoPilot