RepoPilot

Usagi-org/ai-goofish-monitor

A multi-task real-time/scheduled Xianyu monitoring and intelligent analysis system built on Playwright and AI, with a full-featured web management UI. It helps users find the products they want among Xianyu's vast sea of listings.

Healthy

Healthy across all four use cases

Use as dependency: Healthy
Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy
Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy
Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy
No critical CVEs, sane security posture — runnable as-is.

  • Last commit 1d ago
  • 6 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • Single-maintainer risk — top contributor 91% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — the badge updates live from the latest cached analysis.

```markdown
[![RepoPilot: Healthy](https://repopilot.app/api/badge/usagi-org/ai-goofish-monitor)](https://repopilot.app/r/usagi-org/ai-goofish-monitor)
```

Paste at the top of your README.md — renders inline like a shields.io badge.

Social card preview (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/usagi-org/ai-goofish-monitor on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: Usagi-org/ai-goofish-monitor

Generated by RepoPilot · 2026-05-07

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in the Verify before trusting section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/Usagi-org/ai-goofish-monitor shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 1d ago
  • 6 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Single-maintainer risk — top contributor 91% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live Usagi-org/ai-goofish-monitor repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/Usagi-org/ai-goofish-monitor.

What it runs against: a local clone of Usagi-org/ai-goofish-monitor — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in Usagi-org/ai-goofish-monitor | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 31 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>Usagi-org/ai-goofish-monitor</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of Usagi-org/ai-goofish-monitor. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/Usagi-org/ai-goofish-monitor.git
#   cd ai-goofish-monitor
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of Usagi-org/ai-goofish-monitor and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "Usagi-org/ai-goofish-monitor(\.git)?\b" \
  && ok "origin remote is Usagi-org/ai-goofish-monitor" \
  || miss "origin remote is not Usagi-org/ai-goofish-monitor (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in \
  src/app.py \
  src/scraper.py \
  src/services/ai_service.py \
  src/domain/models/task.py \
  src/infrastructure/persistence/sqlite_task_repository.py
do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 31 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/Usagi-org/ai-goofish-monitor"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

A Playwright-based intelligent monitoring system for Xianyu (Alibaba's secondhand marketplace) that crawls product listings in real time or on a schedule, uses multimodal AI (vision + text) to analyze items against user-defined criteria, and surfaces matching products via a FastAPI web UI and multiple notification channels. It automates the tedious task of finding desired items in massive product catalogs by combining web automation with LLM-powered analysis.

Architecturally it is a monolith with three tiers: a Python FastAPI backend (src/api/routes/ handles tasks, accounts, results, and the websocket; src/core/ has the cron scheduler; src/ai_handler.py wraps OpenAI calls), a Vue 3 frontend (referenced as 228341 bytes but the code is not in the listed paths—likely compiled into static/), and a Playwright crawler (spider_v2.py is the main scraper). A SQLite database (data/app.sqlite3) stores tasks, results, and prices; the filesystem (state/, prompts/, logs/, images/) holds login cookies, task templates, and product photos.

👥Who it's for

Chinese Xianyu resellers and bargain hunters who want to automatically monitor specific product categories (e.g., used MacBooks under ¥3000 in good condition) without manually refreshing the app every hour. Developers can extend it with new notification backends or AI prompt strategies.

🌱Maturity & risk

Actively developed—includes CI/CD workflows (GitHub Actions for Docker image builds), a comprehensive Vue + FastAPI frontend, SQLite persistence, and multi-account support. The presence of Dockerfile.base and docker-compose.yaml with persistent volume mounts suggests production deployment is intended. However, a single dominant maintainer (the top contributor authors 91% of recent commits) and the lack of a visible test suite beyond pytest imports indicate alpha-to-beta maturity rather than battle-tested stability.

Moderate risk: the system depends on Playwright (upstream changes to the browser-automation stack or to the scraped site can break deployments), the OPENAI_API_KEY requirement creates vendor lock-in to OpenAI-compatible APIs, and login-state cookie rotation (state/acc_*.json) is brittle against Xianyu's anti-bot measures. No comprehensive test coverage is visible; CI builds Docker images, but it is unclear whether integration tests run. A single point of failure on AI API availability blocks task creation and analysis.

Active areas of work

Active development on Docker deployment and database migration—the codebase recently migrated from file-based storage (config.json, jsonl/) to SQLite with backward-compatibility import logic. GitHub Actions workflows trigger Docker image builds on push. A Chrome extension (chrome-extension/) exists for extracting Xianyu login state, suggesting ongoing UX refinement.

🚀Get running

```bash
git clone https://github.com/Usagi-org/ai-goofish-monitor && cd ai-goofish-monitor
cp .env.example .env
# Edit .env to add OPENAI_API_KEY, OPENAI_BASE_URL, OPENAI_MODEL_NAME
docker compose up -d
docker compose logs -f app
# Open http://127.0.0.1:8000 in a browser (default login: admin/admin123)
```

For local development (non-Docker): pip install -r requirements.txt && python src/app.py after .env setup.

Daily commands:

  • Docker (recommended): docker compose up -d after .env setup.
  • Local dev: pip install -r requirements.txt && python src/app.py (requires Python 3.8+; Playwright browsers are auto-installed via playwright install).
  • Web UI: http://localhost:8000 (or the port from the SERVER_PORT env var).

🗺️Map of the codebase

  • src/app.py — FastAPI application entry point and root route configuration; every contributor must understand the overall application structure and middleware setup.
  • src/scraper.py — Core Playwright-based web scraper for Xianyu; the heart of data collection that all monitoring tasks depend on.
  • src/services/ai_service.py — AI integration service for product analysis; orchestrates LLM calls and response parsing across the system.
  • src/domain/models/task.py — Task data model definition; foundational domain entity used by persistence, scheduling, and API layers.
  • src/infrastructure/persistence/sqlite_task_repository.py — Primary persistence layer for task state and results; any data consistency issue cascades across all features.
  • src/services/scheduler_service.py — APScheduler wrapper managing Cron-based task execution; critical for coordinating concurrent monitoring jobs.
  • src/api/routes/websocket.py — WebSocket endpoint for real-time UI updates; bridges backend task execution with frontend monitoring dashboard.

🛠️How to make changes

Add a New API Endpoint

  1. Create a new router file in src/api/routes/ or add a new route function to an existing one, using FastAPI's @router.post/@router.get decorators (src/api/routes/tasks.py)
  2. Define Pydantic request/response models in the same file or a shared models module (src/api/routes/tasks.py)
  3. Inject required services from src/api/dependencies.py or create new service dependencies (src/api/dependencies.py)
  4. Import and register the router in src/app.py under app.include_router() (src/app.py)
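
A minimal sketch of steps 1–4. The route, model, and dependency names here (PingRequest, get_echo_service) are illustrative, not taken from the actual codebase:

```python
# Hypothetical new route file, e.g. src/api/routes/ping.py.
from typing import Optional

from fastapi import APIRouter, Depends
from pydantic import BaseModel

router = APIRouter(prefix="/api/ping", tags=["ping"])

class PingRequest(BaseModel):
    message: str = "hello"

class PingResponse(BaseModel):
    echoed: str
    note: Optional[str] = None

def get_echo_service():
    # In the real repo, shared services come from src/api/dependencies.py.
    return lambda text: text.upper()

@router.post("/", response_model=PingResponse)
async def ping(req: PingRequest, echo=Depends(get_echo_service)):
    """Echo endpoint showing the router + Pydantic model + dependency wiring."""
    return PingResponse(echoed=echo(req.message))
```

Step 4 is then one line in src/app.py: app.include_router(router).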

Add a New Notification Channel

  1. Create a new client class inheriting from src/infrastructure/external/notification_clients/base.py with async send() method (src/infrastructure/external/notification_clients/base.py)
  2. Register the new client in the factory dictionary in src/infrastructure/external/notification_clients/factory.py (src/infrastructure/external/notification_clients/factory.py)
  3. Add required configuration fields to src/infrastructure/config/settings.py (src/infrastructure/config/settings.py)
  4. Update .env.example with new environment variables for the notification channel (.env.example)
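
A hedged sketch of step 1 for a hypothetical Discord webhook channel; the base-class name and send() signature are assumptions, so check base.py for the real interface before copying this:

```python
# Hypothetical new channel, e.g.
# src/infrastructure/external/notification_clients/discord_client.py.
import httpx

from .base import BaseNotificationClient  # assumed class name; see base.py

class DiscordClient(BaseNotificationClient):
    def __init__(self, webhook_url: str):
        self.webhook_url = webhook_url

    async def send(self, title: str, message: str) -> bool:
        # Discord webhooks accept a plain JSON POST with a "content" field.
        async with httpx.AsyncClient(timeout=10) as client:
            resp = await client.post(
                self.webhook_url,
                json={"content": f"**{title}**\n{message}"},
            )
        return resp.is_success
```

Steps 2–4 would then register DiscordClient in the factory dictionary and surface its webhook_url via settings.py and .env.example.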

Implement AI-Driven Feature (e.g., Custom Prompt Analysis)

  1. Define business logic in src/services/ai_service.py or create a new service module that uses ai_client (src/services/ai_service.py)
  2. Build prompt templates in src/prompt_utils.py with Jinja2 templating for dynamic content (src/prompt_utils.py)
  3. Parse and validate AI responses in src/services/ai_response_parser.py or ai_request_compat.py (src/services/ai_response_parser.py)
  4. Expose the feature via an API endpoint in src/api/routes/ and wire through dependencies in src/api/dependencies.py (src/api/routes/tasks.py)
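
For step 2, an illustrative Jinja2 prompt builder in the spirit of src/prompt_utils.py; the template text and function name are assumptions, not the repo's actual prompts:

```python
# Illustrative Jinja2 prompt template for criteria-based listing analysis.
from jinja2 import Template

ANALYSIS_PROMPT = Template(
    "You are evaluating a secondhand listing against the buyer's criteria.\n"
    "Criteria: {{ criteria }}\n"
    "Title: {{ title }}\n"
    "Description: {{ description }}\n"
    'Answer with JSON: {"match": true|false, "reason": "<one sentence>"}'
)

def build_analysis_prompt(criteria: str, title: str, description: str) -> str:
    # render() substitutes the task-specific values into the template.
    return ANALYSIS_PROMPT.render(
        criteria=criteria, title=title, description=description
    )
```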

Add a Scheduled Task Variant

  1. Define task configuration in src/domain/models/task.py or extend the Task class with new fields (src/domain/models/task.py)
  2. Implement task execution logic in src/services/task_generation_runner.py or a new service (src/services/task_generation_runner.py)
  3. Register the task handler in src/services/scheduler_service.py under the appropriate schedule_* method (src/services/scheduler_service.py)
  4. Optionally add persistence migration in src/infrastructure/persistence/sqlite_bootstrap.py if schema changes are needed (src/infrastructure/persistence/sqlite_bootstrap.py)
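
To make step 3 concrete, a sketch of cron registration with APScheduler; the real wrapper lives in src/services/scheduler_service.py and its method names will differ:

```python
# Sketch of registering a cron-driven monitoring job with APScheduler.
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.cron import CronTrigger

scheduler = AsyncIOScheduler()

async def run_monitor_task(task_id: int) -> None:
    """Placeholder: load the Task from the repository and execute one crawl pass."""
    ...

def schedule_task(task_id: int, cron_expr: str) -> None:
    # cron_expr like "*/30 * * * *" runs the job every 30 minutes.
    scheduler.add_job(
        run_monitor_task,
        CronTrigger.from_crontab(cron_expr),
        args=[task_id],
        id=f"task-{task_id}",
        replace_existing=True,
    )
```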

🔧Why these technologies

  • Playwright — Headless browser automation required to navigate dynamic Xianyu website with JavaScript rendering and anti-bot protections
  • FastAPI + Uvicorn — Async-native Python web framework enables concurrent task execution and real-time WebSocket updates without blocking
  • OpenAI / LLM APIs — Multi-modal AI analysis of product images/descriptions for intelligent filtering matching user intent
  • APScheduler — In-process task scheduling with Cron support allows flexible periodic monitoring without external job queue infrastructure
  • SQLite — Lightweight, embeddable relational database suitable for single-server deployment with no external DB dependency
  • Pydantic — Schema validation and serialization for API requests/responses and environment configuration with type safety
  • Jinja2 — Template engine for dynamic AI prompt generation based on task-specific criteria and user preferences

⚖️Trade-offs already made

  • In-process scheduling (APScheduler) vs. external job queue (Celery/RQ)
    • Why: Simplifies deployment to single-container Docker setup without requiring Redis/RabbitMQ; suitable for small-to-medium task counts
    • Consequence: Limited horizontal scaling; task state is lost on process restart unless persisted

🪤Traps & gotchas

  • OPENAI_API_KEY required: the app will not start tasks without a valid key in .env.
  • Playwright browser installation: the first run auto-installs Chromium via playwright install; in Docker it's pre-baked.
  • Login state expires: cookies in state/acc_*.json become stale if Xianyu changes security; no auto-refresh logic is visible.
  • Database migration on startup: the app attempts a one-time import from the old config.json + jsonl/ into SQLite; if this fails silently, old tasks may not appear.
  • Port conflicts: default SERVER_PORT=8000; if the port is already in use, docker-compose.yaml must be edited (the env var alone won't remap the service).
  • Image cleanup: product images are temporary in images/task_images_<task_name>/ and auto-deleted after task completion; don't rely on them for long-term storage.
  • Single-threaded AI analysis: if the OPENAI_API_KEY quota is exhausted or the API goes down, all pending AI tasks block indefinitely (no async fallback visible; see the sketch below).
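
One defensive pattern for that last trap (not code from the repo): bound every AI call with asyncio.wait_for so a dead or throttled API fails fast instead of stalling the queue.

```python
# Sketch: wrap an AI analysis coroutine so a hung API call raises instead of blocking.
import asyncio

async def analyze_with_timeout(coro, timeout_s: float = 60.0):
    try:
        return await asyncio.wait_for(coro, timeout=timeout_s)
    except asyncio.TimeoutError as exc:
        # Surface the failure so the scheduler can retry or skip this item.
        raise RuntimeError(f"AI analysis timed out after {timeout_s}s") from exc
```

Usage would look like `await analyze_with_timeout(ai_client.analyze(item), timeout_s=30)`, where ai_client.analyze is a stand-in for whatever coroutine the repo's AI service exposes.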


💡Concepts to learn

  • Headless Browser Automation (Playwright) — Core to scraping Xianyu's dynamic JavaScript-heavy product pages without triggering anti-bot detection; understanding Playwright's context, cookies, and user-agent rotation is essential to modify spider_v2.py.
  • Multimodal AI (Vision + Text LLM) — The system's differentiator—it sends both product images and text (title, description) to an LLM for semantic understanding; critical to grasp how ai_handler.py constructs multimodal prompts and why the OPENAI_MODEL_NAME must support images.
  • Cron Scheduling & APScheduler — Tasks can run on intervals (e.g., 'every 30 minutes') via cron expressions; src/core/cron_utils.py integrates APScheduler to avoid blocking the FastAPI thread; understanding this is key to adding recurring task features.
  • WebSocket for Real-Time Progress — AI task creation is async; src/api/routes/websocket.py streams progress back to the browser UI without polling; understanding async generators in FastAPI is needed to extend real-time features.
  • SQLite Persistence & ORM (SQLAlchemy) — Recent migration from file-based (jsonl) to SQLite; src/domain/models/ defines schema; understanding ORM relationships is essential to add new fields or optimize query performance for large result sets.
  • Session & Cookie Rotation — Xianyu detects bot traffic; the system stores login cookies per account in state/*.json and rotates them across tasks to avoid IP/account bans; understanding this anti-detection pattern is crucial for avoiding rate-limiting.
  • Pydantic Settings & Environment Variables — Config is driven by .env and parsed via pydantic-settings in src/config.py; understanding this pattern is needed to add new configuration knobs (e.g., timeout values, proxy settings) without hardcoding. A minimal sketch follows this list.
  • apify/crawlee — High-level web scraping framework (alternative to raw Playwright) with built-in scaling, proxy rotation, and session management—useful for extending spider_v2.py into a more robust crawler.
  • langchain-ai/langchain — LLM orchestration library; currently ai_handler.py directly calls OpenAI, but LangChain could abstract multimodal analysis and enable pluggable model backends beyond OpenAI.
  • n8n/n8n — Low-code automation platform with web UI, similar task scheduling + notification goals; shows alternative architecture for no-code users who don't want Python.
  • home-assistant/core — Home Assistant's notification/automation framework is a mature reference for multi-channel alerts (Bark, Telegram, webhooks); could inspire refactoring of notification handling in src/api/routes/.
  • yt-dlp/yt-dlp — Single-file Python tool with similar deployment simplicity (docker, pip, standalone) and ecosystem extension pattern; demonstrates how to structure optional features (proxy, auth, output formats) similar to account/task plugins.
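
To make the pydantic-settings pattern concrete, a minimal sketch; the field names mirror variables mentioned in this document, and the actual Settings class in src/config.py will differ:

```python
# Minimal pydantic-settings example of the .env-driven config pattern.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    openai_api_key: str  # required; startup fails without it
    openai_base_url: str = "https://api.openai.com/v1"
    openai_model_name: str = "gpt-4o"  # illustrative default
    server_port: int = 8000

settings = Settings()  # raises a validation error if OPENAI_API_KEY is missing
```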

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for AI handler and message builder modules

The repo has pytest and pytest-asyncio in dependencies but lacks test coverage for core AI logic. src/ai_handler.py and src/ai_message_builder.py are critical for the AI-driven task system but have no corresponding test files. This would improve reliability of the core AI integration with OpenAI/Claude and catch regressions early.

  • [ ] Create tests/test_ai_handler.py covering ai_handler.py functions (prompt building, response parsing, error handling)
  • [ ] Create tests/test_ai_message_builder.py for message construction with various product data scenarios
  • [ ] Add mock fixtures for OpenAI API responses in tests/conftest.py
  • [ ] Add pytest configuration to pyproject.toml with coverage thresholds
  • [ ] Ensure tests cover both sync and async code paths given pytest-asyncio dependency
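
As a starting point for the mock-fixtures item, a hypothetical tests/conftest.py fixture; the response shape is an assumption about how ai_handler consumes OpenAI-style replies:

```python
# Hypothetical tests/conftest.py: a canned chat-completion response so
# ai_handler tests never hit the network.
from unittest.mock import AsyncMock

import pytest

@pytest.fixture
def mock_ai_client():
    client = AsyncMock()
    # Awaiting client.chat.completions.create(...) returns this canned payload.
    client.chat.completions.create.return_value = {
        "choices": [
            {"message": {"content": '{"match": true, "reason": "fits criteria"}'}}
        ]
    }
    return client
```

A test can then inject mock_ai_client wherever ai_handler expects an OpenAI client and assert on the parsed result.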

Add GitHub Actions workflow for automated Docker image security scanning

The repo has Dockerfile, Dockerfile.base, and Dockerfile.release but no security scanning in the existing .github/workflows (only claude.yml and docker-image.yml are present). Given the system handles user accounts and credentials, scanning for vulnerabilities in base images and dependencies would be valuable before releasing.

  • [ ] Create .github/workflows/container-security.yml using trivy or grype action
  • [ ] Configure scanning to trigger on: push to main, pull requests, and weekly schedule
  • [ ] Add configuration to fail workflow on HIGH/CRITICAL vulnerabilities found in Dockerfile.release
  • [ ] Include dependency scanning for requirements.txt and requirements-runtime.txt
  • [ ] Document security scanning results and how contributors should handle findings in CONTRIBUTING.md

Add integration tests for notification clients and webhook delivery

The repo supports multiple notification channels (ntfy.sh, Bark, Telegram, Webhook) via src/infrastructure/external/notification_clients/ but lacks integration tests. The base.py and bark_client.py files suggest a plugin architecture that should be tested to prevent notification delivery failures in production.

  • [ ] Create tests/integration/test_notification_clients.py with mock HTTP servers for each client type
  • [ ] Add tests for bark_client.py including retry logic and error handling with invalid API URLs
  • [ ] Extend tests to cover remaining notification clients in notification_clients/ directory
  • [ ] Add conftest.py fixtures for mock notification servers (using responses library or httpretty)
  • [ ] Document how to run integration tests in a new TESTING.md and add to CI/CD pipeline

🌿Good first issues

  • Add unit tests for src/ai_handler.py and src/ai_message_builder.py—currently no test files for AI prompt building or OpenAI client interaction are visible; contributors could write pytest cases covering different product categories and prompt templates.
  • Document the Xianyu login state JSON schema in AGENTS.md or a new LOGIN_STATE.md—the Chrome extension exports JSON but the exact fields and lifetime are undocumented, making account debugging hard for new users.
  • Implement a notification health-check endpoint in src/api/routes/settings.py that validates Webhook/Bark/WeChat Work credentials before saving, reducing silent notification failures in production.

📝Recent commits

  • d3c816f — Merge pull request #484 from Usagi-org/dev (dingyufei615)
  • 65fea72 — feat(results): add blacklist rules for result filtering (dingyufei615)
  • a00fb73 — Merge pull request #450 from QuasarNe/fix/zhipu-reasoning-content-support (dingyufei615)
  • 1256146 — Merge pull request #476 from mbC6zC/feat/item-block-filter-persist (dingyufei615)
  • 8228550 — feat: add item block/unblock and persist filter state (mbC6zC)
  • 63fe7cd — Update language links in README_EN.md (dingyufei615)
  • c5dadec — Revise README for project name and feature clarity (dingyufei615)
  • 0cebf1d — Merge pull request #471 from Usagi-org/dev (dingyufei615)
  • 80e0af6 — Merge pull request #463 from lennondotw/fix/httpx-no-proxy-ipv6-cidr (dingyufei615)
  • b40b020 — Merge pull request #464 from lennondotw/fix/response-format-fallback-detection (dingyufei615)

🔒Security observations

  • Critical · Hardcoded Default Credentials — .env.example. The .env.example file contains default hardcoded credentials (WEB_USERNAME=admin, WEB_PASSWORD=admin123). Users may deploy with these defaults, allowing unauthorized access to the web management interface, which controls task configuration, account management, and AI settings. Fix: Remove default credentials from .env.example. Force users to set strong, unique credentials during initial setup. Implement credential validation that rejects weak passwords. Add startup warnings if defaults are detected (see the sketch after this section).
  • Critical · Exposed Sensitive Configuration in Docker Volumes — docker-compose.yaml. The docker-compose.yaml mounts .env file directly into the container, and multiple volumes expose sensitive data (config.json, prompts, logs, images). If the container is compromised, attackers gain access to API keys, credentials, and all historical data. Fix: Use Docker secrets instead of .env file mounting for sensitive data. Implement proper file permissions (600) on sensitive config files. Consider encrypting sensitive configuration at rest. Restrict volume access and implement log rotation.
  • Critical · API Key Exposure in Environment Variables — .env.example, src/infrastructure/config/env_manager.py. OPENAI_API_KEY and other API credentials are stored as plain-text environment variables. These can be exposed through container logs, process listings, Docker inspect commands, or memory dumps. Fix: Use secrets management systems (HashiCorp Vault, Docker Secrets, AWS Secrets Manager). Never log API keys. Rotate keys regularly. Implement key expiration policies. Use read-only filesystem mounts for secret files.
  • High · Unauthenticated WebSocket Endpoint — src/api/routes/websocket.py. The presence of 'src/api/routes/websocket.py' suggests real-time communication endpoints. WebSocket connections may not have proper authentication/authorization checks, potentially allowing unauthorized users to receive real-time data or send commands. Fix: Implement mandatory authentication on all WebSocket endpoints. Validate JWT tokens or session cookies before accepting connections. Add rate limiting and connection throttling. Implement proper authorization checks for each message type.
  • High · SQL Injection Risk in Task Repository — src/infrastructure/persistence/sqlite_task_repository.py. The presence of 'sqlite_task_repository.py' and dynamic query building patterns suggest potential SQL injection vulnerabilities, especially if user input (keywords, task names) is not properly parameterized. Fix: Use parameterized queries exclusively (prepared statements with placeholders). Never concatenate user input into SQL strings. Use ORM frameworks like SQLAlchemy. Implement input validation and sanitization. Add SQL query logging and monitoring.
  • High · No HTTPS Enforcement — docker-compose.yaml, src/app.py. The application exposes port 8000 without enforced HTTPS. Credentials (API keys, passwords) transmitted over HTTP can be intercepted by attackers on the network. Fix: Deploy behind a reverse proxy (Nginx, Traefik) with TLS/SSL. Generate valid SSL certificates. Redirect all HTTP traffic to HTTPS. Implement HSTS headers. Use secure cookie flags (Secure, HttpOnly, SameSite).
  • High · Missing CORS and Security Headers — src/app.py, src/api/routes. No evidence of CORS configuration or security headers (CSP, X-Frame-Options, X-Content-Type-Options) in FastAPI app. This exposes the API to cross-origin attacks and clickjacking. Fix: Implement proper CORS middleware with specific allowed origins (not '*'). Add security headers: Content-Security-Policy, X-Frame-Options: DENY, X-Content-Type-Options: nosniff, Strict-Transport-Security. Use FastAPI's CORSMiddleware with restrictive settings.
  • High · Unvalidated External API Integration — src/infrastructure/external/ai_client.py, src/infrastructure/external/notification_clients/. The application integrates with multiple external services (OpenAI, Telegram, WeChat, Bark, ntfy) without visible request validation or response parsing safeguards. Man-in-the-middle attacks or compromised APIs could inject malicious content. Fix: Validate and sanitize all responses from external services before acting on them, enforce TLS certificate verification on outbound requests, and treat AI and notification payloads as untrusted input.

LLM-derived; treat as a starting point, not a security audit.
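
For the first finding, a sketch of the suggested startup warning; the variable names come from .env.example, but the check itself is not in the repo:

```python
# Sketch: warn at startup if the shipped default credentials are still in use.
import logging
import os

logger = logging.getLogger(__name__)

SHIPPED_DEFAULTS = {"WEB_USERNAME": "admin", "WEB_PASSWORD": "admin123"}

def warn_on_default_credentials() -> None:
    """Call once at startup; log loudly if the shipped defaults are still in use."""
    for var, default in SHIPPED_DEFAULTS.items():
        if os.environ.get(var, default) == default:
            logger.warning(
                "%s is still the shipped default; change it before exposing the web UI.",
                var,
            )
```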


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
