dreammis/social-auto-upload
Automated video uploading to social media: Douyin (抖音), Xiaohongshu (小红书), WeChat Channels (视频号), TikTok, YouTube, Bilibili
Missing license — unclear to depend on
Weakest axis: no license — legally unclear; no CI workflows detected
Documented and popular — a useful reference codebase to read through.
- ✓ Last commit today
- ✓ 11 active contributors
- ✓ Tests present
- ⚠ Concentrated ownership — top contributor handles 52% of recent commits
- ⚠ No license — legally unclear to depend on
- ⚠ No CI workflows detected
What would change the summary?
- → Use as dependency: Concerns → Mixed if a permissive license (MIT, Apache-2.0, etc.) is published
- → Fork & modify: Concerns → Mixed if a LICENSE file is added
- → Deploy as-is: Concerns → Mixed if a LICENSE file is added
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Great to learn from" badge
Paste into your README — live-updates from the latest cached analysis.
[RepoPilot badge](https://repopilot.app/r/dreammis/social-auto-upload) — paste at the top of your README.md; it renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/dreammis/social-auto-upload on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: dreammis/social-auto-upload
Generated by RepoPilot · 2026-05-07 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/dreammis/social-auto-upload shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
WAIT — Missing license — unclear to depend on
- Last commit today
- 11 active contributors
- Tests present
- ⚠ Concentrated ownership — top contributor handles 52% of recent commits
- ⚠ No license — legally unclear to depend on
- ⚠ No CI workflows detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live dreammis/social-auto-upload
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/dreammis/social-auto-upload.
What it runs against: a local clone of dreammis/social-auto-upload — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in dreammis/social-auto-upload | Confirms the artifact applies here, not a fork |
| 2 | Default branch main exists | Catches branch renames |
| 3 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 4 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of dreammis/social-auto-upload. If you don't
# have one yet, run these first:
#
# git clone https://github.com/dreammis/social-auto-upload.git
# cd social-auto-upload
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of dreammis/social-auto-upload and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "dreammis/social-auto-upload(\.git)?\b" \
  && ok "origin remote is dreammis/social-auto-upload" \
  || miss "origin remote is not dreammis/social-auto-upload (artifact may be from a fork)"
# 2. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 3. Critical files exist
test -f "sau_cli.py" \
  && ok "sau_cli.py" \
  || miss "missing critical file: sau_cli.py"
test -f "sau_backend.py" \
  && ok "sau_backend.py" \
  || miss "missing critical file: sau_backend.py"
test -f "myUtils/postVideo.py" \
  && ok "myUtils/postVideo.py" \
  || miss "missing critical file: myUtils/postVideo.py"
test -f "conf.example.py" \
  && ok "conf.example.py" \
  || miss "missing critical file: conf.example.py"
test -f "requirements.txt" \
  && ok "requirements.txt" \
  || miss "missing critical file: requirements.txt"
# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/dreammis/social-auto-upload"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
social-auto-upload is a Python/Vue automation tool that uploads video content to multiple Chinese and international social media platforms — TikTok, Bilibili, Douyin (抖音), Xiaohongshu (小红书), Kuaishou (快手), WeChat Channels (视频号), and Baijiahao (百家号) — via browser automation (Playwright), API clients (the biliup and xhs modules), and scheduled publishing. It solves the high-friction problem of manually uploading identical video content to 7+ platforms by providing unified uploaders, CLI tools, and AI agent skills for one-click multi-platform distribution.

It is a multi-entrypoint monorepo: uploaders/ contains platform-specific modules (douyin_uploader, bilibili_uploader, xhs_uploader, etc.), examples/ holds runnable demo scripts for each platform, skills/ provides AI agent integrations (douyin-upload/, kuaishou-upload/, xiaohongshu-upload/, bilibili-upload/, each with a SKILL.md spec), myUtils/ handles auth/login/video-posting utilities, db/ manages the SQLAlchemy schema, and web UI code lives in a Vue frontend (component directories unspecified per the language breakdown). The CLI is unified under the sau command, documented in docs/CLI.md.
👥Who it's for
Content creators, video production teams, and social media operations managers who publish the same videos across multiple Chinese platforms and TikTok. Also useful for AI agents (OpenClaw, Claude Code, Codex) that need to automate repetitive cross-platform publishing workflows without real-time human intervention.
🌱Maturity & risk
Actively developed with recent commit activity visible in documentation plans dated 2026-03-25 (skill distribution, Bilibili CLI implementation, browser CLI unification). Bilibili, Douyin, and Xiaohongshu uploaders are marked production-ready with CLI and Skill support; Kuaishou/百家号/视频号/TikTok have functional but less mature implementations. No obvious test suite visible (no tests/ directory in file list), which is a maturity concern despite active development.
High dependency count (~70 packages) including pinned versions (Playwright 1.52.0, biliup 0.4.98) that may lag upstream security fixes; heavy reliance on browser automation via Playwright which is fragile to UI changes on target platforms. Single maintainer risk (dreammis) with no visible GitHub Actions CI/CD. Platform-specific scrapers (Douyin, Xiaohongshu) using xhs/ykdl modules may break frequently as platforms update login flows and anti-bot measures.
Active areas of work
Active refactor toward unified CLI and agent skill distribution: docs/superpowers/plans/ contains implementation roadmaps for Bilibili CLI (2026-03-25), browser CLI unification design (2026-03-25), and Xiaohongshu shallow alignment (2026-03-25). Douyin, Xiaohongshu, Kuaishou marked as current mainline focus with CLI/Skill completion. Bilibili runtime auto-preparation of biliup in progress. TikTok and other platforms awaiting similar unification.
🚀Get running
Clone: git clone https://github.com/dreammis/social-auto-upload.git && cd social-auto-upload. Install: uv sync (preferred per the agent bootstrap docs) or pip install -r requirements.txt (dependencies are also declared in pyproject.toml). Copy the config: cp conf.example.py conf.py and edit it with API keys/cookies. Run: python -m examples.upload_to_douyin, or sau --help for CLI mode, or deploy via docker build -t social-auto-upload . using the provided Dockerfile.
Daily commands:
Development: uv run python -m examples.upload_to_douyin after setting DOUYIN_COOKIE in conf.py. Production CLI: uv run sau upload --platform douyin --video /path/to/video.mp4 --title '视频标题' (exact syntax in docs/CLI.md). Web server: uv run flask --app main:app run --host 0.0.0.0 --port 8000 (inferred from Flask dependency and docs structure, actual entry point unspecified). Docker: docker build -t social-auto-upload . && docker run -e PLATFORM=douyin -v /videos:/app/videos social-auto-upload.
🗺️Map of the codebase
- sau_cli.py — Main CLI entry point for the entire social-auto-upload system; every contributor must understand how video uploads are initiated.
- sau_backend.py — Backend Flask server that handles API requests for accounts, materials, and publishing; core backend logic.
- myUtils/postVideo.py — Central video posting module that orchestrates uploads across all supported platforms; load-bearing abstraction for platform dispatch.
- conf.example.py — Configuration template showing all required credentials and settings for each platform; essential for understanding platform setup.
- requirements.txt — Dependencies including biliup, aiohttp, and platform-specific libraries; defines which uploaders and async frameworks are used.
- skills/bilibili-upload/SKILL.md — Specification for the new skill-based architecture for platform uploads; critical for understanding the planned refactoring direction.
- examples/upload_to_douyin.py — Reference example showing the complete video upload flow to Douyin; template for understanding platform integration patterns.
🛠️How to make changes
Add Support for a New Platform
- Create a new example script at examples/upload_to_{platform}.py that imports a platform-specific uploader from a third-party library (e.g., biliup for Bilibili)
- Add platform credentials and configuration to conf.example.py with the required fields (api_key, cookie, user_id, etc.)
- Add a platform case in the myUtils/postVideo.py dispatch_to_platform() function to route to the correct uploader
- Add a login/cookie extraction example at examples/get_{platform}_cookie.py following existing patterns
- Create skills/{platform}-upload/SKILL.md with the CLI contract and runtime requirements
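The first step above could be sketched as a minimal example-script skeleton. Everything in this block is illustrative: UploadRequest, PlatformUploader, and the cookie handling are hypothetical names, not code from the repo.

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class UploadRequest:
    video_path: Path
    title: str
    tags: list


class PlatformUploader:
    """Illustrative wrapper; a real one would drive Playwright or an API client."""

    def __init__(self, cookie: str):
        # Each platform needs its own cookie; see examples/get_{platform}_cookie.py.
        if not cookie:
            raise ValueError("cookie required")
        self.cookie = cookie

    def upload(self, req: UploadRequest) -> dict:
        # A real implementation would open the platform's upload page or call
        # its API here; this sketch only returns a placeholder result.
        return {"status": "queued", "title": req.title, "tags": req.tags}


if __name__ == "__main__":
    uploader = PlatformUploader(cookie="example-cookie")
    result = uploader.upload(UploadRequest(Path("demo.mp4"), "Demo video", ["demo"]))
    print(result["status"])
```

Keeping the request shape in a dataclass makes the dispatch step (adding a case in postVideo.py) a matter of passing one object rather than a growing argument list.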
Add a New Frontend View (Account/Material Management)
- Create a Vue 3 component at sau_frontend/src/views/{FeatureName}.vue following the AccountManagement.vue or MaterialManagement.vue pattern
- Create an API client module at sau_frontend/src/api/{feature}.js wrapping backend HTTP calls using the request.js utility
- Create a Pinia store at sau_frontend/src/stores/{feature}.js to manage component state
- Register the route in sau_frontend/src/router/index.js and add a navigation link to App.vue
Add a New Backend API Endpoint
- Define a Flask route in sau_backend.py with an @app.route() decorator, accepting JSON and returning JSON responses
- Create the corresponding API client method in sau_frontend/src/api/index.js using axios with a base URL and error handling
- Document the endpoint behavior in sau_backend/README.md with request/response examples
- Add database schema changes if needed via Alembic migrations (see the alembic dependency and db/createTable.py)
Migrate an Upload Module to New Skills Architecture
- Create skills/{platform}-upload/SKILL.md defining the CLI contract, input parameters, and success criteria
- Add CLI reference examples at skills/{platform}-upload/scripts/examples/ showing command invocation
- Document runtime requirements and dependencies in skills/{platform}-upload/references/runtime-requirements.md
- Transition the myUtils/postVideo.py dispatch to call the new skill CLI instead of a direct library import
🔧Why these technologies
- Flask + aiohttp — Enables async I/O for parallel platform uploads without blocking; critical for multi-platform batch publishing.
- Vue 3 + Vite — Modern SPA framework with fast dev server and build tooling; provides responsive UI for account/material management.
- biliup library — Specialized Bilibili uploader handling platform-specific auth and metadata; reduces custom integration code.
- Pinia state management — Centralized store for account credentials and app state; reduces prop drilling and enables cross-component communication.
- Alembic migrations — Version-controlled database schema changes; enables team collaboration on data model evolution.
⚖️Trade-offs already made
- Separate myUtils modules (auth.py, postVideo.py, login.py) instead of a single unified service class
  - Why: Allows independent evolution of each concern and easier per-platform customization without tight coupling.
  - Consequence: Scattered logic across files; requires careful import management and makes tracing the upload flow harder for new contributors.
- Cookie-based authentication stored in conf.py rather than a secure credential vault
  - Why: Simpler implementation and works with headless browser automation; matches existing platform SDK requirements.
  - Consequence: Credentials are at higher security risk if conf.py is exposed; requires careful .gitignore hygiene and encryption at rest.
- Skills architecture introduced alongside the legacy monolithic postVideo.py upload dispatcher
  - Why: Allows gradual migration to modular skills without forcing a rewrite of all platform uploaders at once.
  - Consequence: Duplicated upload logic between old and new paths during the transition; increases maintenance burden.
- Backend and frontend tightly integrated in a single repo (sau_backend.py + sau_frontend/)
  - Why: Reduces deployment complexity and simplifies credential passing between the frontend auth UI and backend upload logic.
  - Consequence: Monolithic deployment; cannot scale frontend and backend independently or use alternative frontend clients.
🚫Non-goals (don't propose these)
- Does not handle real-time video streaming (batch/pre-recorded uploads only)
🪤Traps & gotchas
- Platform cookies expire frequently and must be refreshed manually (an examples/get_*_cookie.py script is required per platform).
- Some uploaders expect a local Chromium installation for Playwright (install via playwright install after uv sync).
- biliup auto-downloads during runtime; check disk space (~500MB).
- Xiaohongshu/Kuaishou uploaders use browser automation, which is slow (~30-60s per upload).
- Flask async mode requires eventlet, which may conflict with native asyncio in some Python versions.
- Database migrations via Alembic (no migrations/ dir visible) may not be initialized — check the db/ setup before first run.
- Video codec constraints are platform-specific (Bilibili prefers H.264, Xiaohongshu H.265).
- No explicit test suite means regressions are only caught by manual testing or user reports.
🏗️Architecture
💡Concepts to learn
- Browser Automation via Playwright — Xiaohongshu, Kuaishou, Baijiahao uploaders rely on Playwright to simulate human browser interaction (login, form fill, upload) to bypass anti-bot measures; understanding page selectors, waits, and headless limitations is essential for debugging platform-specific failures
- API Client Wrappers (biliup, xhs modules) — Rather than reverse-engineering every platform, this project wraps native/community Python clients (biliup for Bilibili, xhs for Xiaohongshu) that handle protocol details; knowing when to use API vs. browser automation is key to adding new platforms efficiently
- Platform-Specific Cookie/Session Management — Each platform (Douyin, Xiaohongshu, etc.) requires different authentication methods—some via cookie injection, some via QR login in browser, some via API tokens; conf.py and examples/get_*_cookie.py show how to abstract this pattern across platforms
- Scheduled Job Persistence with SQLAlchemy ORM — The project uses SQLAlchemy to store scheduled upload jobs in a database (see db/createTable.py) so uploads can be deferred and retried; understanding ORM migrations and query patterns is needed to extend job scheduling features
- AI Agent Skill System — The skills/ directory contains SKILL.md specs that allow AI agents (OpenClaw, Claude Code) to invoke uploads; this is a custom pattern where agents understand skill metadata and invoke platform-specific handlers without hardcoding—critical for the 'Agent Bootstrap' onboarding flow
- Video Codec & Container Negotiation — Different platforms enforce different video codec requirements (Bilibili H.264, Xiaohongshu H.265 preferred); uploaders must validate/transcode incoming videos; understanding yt-dlp and ffmpeg integration (inferred via streamlink, pillow dependencies) is needed for robust handling
- Async/Await Concurrency with aiohttp & eventlet — The backend uses aiohttp for async HTTP requests and eventlet for greenlet-based concurrency to handle multiple uploads in parallel; understanding coroutines, event loops, and greenlet limitations prevents deadlocks when adding concurrent platform operations
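To make the scheduled-job concept concrete, here is a minimal SQLAlchemy sketch. The ScheduledUpload model and its columns are assumptions for illustration; the real schema lives in db/createTable.py.

```python
from datetime import datetime, timezone

from sqlalchemy import create_engine, Column, Integer, String, DateTime
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class ScheduledUpload(Base):
    # Hypothetical table; column names are illustrative only.
    __tablename__ = "scheduled_uploads"
    id = Column(Integer, primary_key=True)
    platform = Column(String, nullable=False)
    video_path = Column(String, nullable=False)
    publish_at = Column(DateTime, nullable=False)
    status = Column(String, default="pending")


# In-memory DB for the sketch; the project would point at its real database.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add(ScheduledUpload(platform="douyin",
                                video_path="/videos/a.mp4",
                                publish_at=datetime.now(timezone.utc)))
    session.commit()
```

Persisting jobs like this is what lets uploads be deferred and retried after a crash, instead of living only in process memory.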
🔗Related repos
- waydroid/waydroid — Alternative approach to multi-platform mobile app automation; useful if browser automation proves too fragile for updated Chinese platforms
- ytdl-org/youtube-dl — Inspirational foundation for yt-dlp (already a dependency); shows a pattern for extracting/processing video metadata across platforms
- biliup/biliup — Direct upstream dependency for Bilibili uploads; understanding its API and CLI (biliup-rs) helps debug Bilibili-specific issues in this project
- Hyuto/xhs — Direct upstream dependency for Xiaohongshu cookie extraction and API calls; critical for maintaining the XHS uploader when platform login flows change
- requests/requests-html — Related scraping tool for simpler web automation if Playwright's browser overhead becomes prohibitive for lightweight login-only flows
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add integration tests for upload_to_* examples with mock platforms
The repo has 9 example upload scripts (upload_to_douyin.py, upload_to_bilibili.py, upload_to_xhs.py, etc.) but no test coverage. These are critical entry points for users. Adding integration tests with mocked API responses would catch breaking changes early and serve as executable documentation for how each platform's upload flow works.
- [ ] Create tests/test_examples/ directory with pytest fixtures for mocking Playwright/HTTP responses
- [ ] Add tests/test_examples/test_douyin_upload.py, test_bilibili_upload.py, test_xhs_upload.py covering the 9 examples in examples/ directory
- [ ] Mock the browser automation calls in myUtils/postVideo.py to avoid needing real credentials
- [ ] Add conftest.py with shared fixtures for video file creation and platform response mocking
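A sketch of what one such mocked test could look like using only the stdlib's unittest.mock. run_upload and the dispatch signature are stand-ins for whatever myUtils/postVideo.py actually exposes.

```python
from pathlib import Path
from unittest import mock


def run_upload(dispatch, video: Path, platform: str) -> dict:
    # Stand-in for the example scripts' flow; `dispatch` represents the
    # platform call that the real code would make via Playwright or an API.
    if video.suffix != ".mp4":
        raise ValueError("expected an .mp4 file")
    return dispatch(platform=platform, video=video)


# Mock the platform call so no credentials or browser are needed.
dispatch = mock.Mock(return_value={"status": "success", "video_id": "abc123"})
result = run_upload(dispatch, Path("sample.mp4"), "douyin")
dispatch.assert_called_once_with(platform="douyin", video=Path("sample.mp4"))
```

In pytest, the Mock would live in a conftest.py fixture so each of the nine example tests can share it.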
Extract shared uploader logic from examples into sau_backend/uploaders/ module structure
Currently, upload logic is scattered across examples/ and myUtils/postVideo.py. The docs reference 'uploader modules' (docs/skill-distribution.md) but the actual uploader classes aren't organized in a consistent way. Formalizing uploaders as a plugin system under sau_backend/uploaders/ with a base class would enable easier platform addition and maintenance.
- [ ] Create sau_backend/uploaders/__init__.py and sau_backend/uploaders/base_uploader.py with an abstract base class (methods: authenticate, upload_video, schedule_post)
- [ ] Create platform-specific uploaders: DouyinUploader, BilibiliUploader, XhsUploader, TiktokUploader classes that inherit from base_uploader.py
- [ ] Refactor myUtils/postVideo.py to delegate to uploader classes instead of platform-specific logic
- [ ] Add sau_backend/uploaders/registry.py to dynamically load uploaders, making platform support discoverable
Add Python type hints and mypy validation for core upload modules
The codebase lacks type hints in critical modules like myUtils/auth.py, myUtils/postVideo.py, and myUtils/login.py. This makes it difficult for contributors to understand function signatures and return types. Adding type hints + mypy validation in CI would prevent runtime errors and improve IDE support.
- [ ] Add type hints to myUtils/auth.py (function signatures, return types for cookie retrieval functions)
- [ ] Add type hints to myUtils/postVideo.py covering video upload parameters and response structures
- [ ] Add type hints to myUtils/login.py for login credential handling
- [ ] Create pyproject.toml [tool.mypy] config section and add mypy check to CI workflow (requires new GitHub Actions file: .github/workflows/mypy.yml)
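For illustration, a typed version of a hypothetical cookie-lookup helper; the signature is invented, not the actual myUtils/auth.py API.

```python
from typing import Optional


def load_cookie(platform: str, store: dict) -> Optional[str]:
    # Hypothetical signature for a myUtils/auth.py-style lookup. Annotating
    # the Optional return forces callers to handle the missing-cookie case,
    # which mypy would then enforce across the codebase.
    value = store.get(platform)
    return value if isinstance(value, str) else None
```

Even small annotations like this give IDEs and mypy enough to flag callers that forget a platform's cookie may be absent.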
🌿Good first issues
- Add comprehensive test suite for uploaders/douyin_uploader.py using pytest + mock Playwright to prevent regressions when Douyin UI changes; currently no tests/ directory visible despite being the 'main focus' platform
- Document platform-specific video requirements (resolution, codec, duration, file size limits) in docs/platform-specs.md since only README table exists; extract constraints from uploaders/ code into user-facing docs
- Implement the missing CLI commands for WeChat Channels (视频号) and Baijiahao (currently marked ❌ in the feature table) by creating uploaders/tencent_uploader.py and skills/tencent-upload/SKILL.md following the existing douyin/bilibili pattern
⭐Top contributors
- @dreammis — 52 commits
- @Deroino — 16 commits
- @LeeDebug — 13 commits
- @leordsong — 8 commits
- @gzxy0102 — 4 commits
📝Recent commits
- 78b77f6 — Merge pull request #202 from zzzzzy2k/bugfix/zzyBugFix (dreammis)
- 8c30f4c — fix: Xiaohongshu upload failing when only one tag is provided (zzzzzy2k)
- 34a3b3b — ok (dreammis)
- 31e2713 — 1 (dreammis)
- 13984e6 — change context (dreammis)
- 94fedc5 — update (dreammis)
- 2eb6633 — Tencent (视频号) refactor, first pass (dreammis)
- 6fa68d4 — Tencent (视频号) refactor, first pass (dreammis)
- ff0285c — Tencent (视频号) refactor, first pass (dreammis)
- cccfea0 — Merge pull request #193 from ikiaichat/bugfix/xhs (dreammis)
🔒Security observations
Failed to generate security analysis.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.