RepoPilot

timqian/chinese-independent-blogs

A list of Chinese independent blogs (中文独立博客列表)

Healthy

Healthy across the board

Use as dependency — Healthy (weakest axis)

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify — Healthy

Has a license and CI — a clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 4d ago
  • 50+ active contributors
  • Distributed ownership (top contributor 48% of recent commits)
  • MIT licensed
  • CI configured
  • No test directory detected

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant: "RepoPilot: Healthy"
[![RepoPilot: Healthy](https://repopilot.app/api/badge/timqian/chinese-independent-blogs)](https://repopilot.app/r/timqian/chinese-independent-blogs)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/timqian/chinese-independent-blogs on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: timqian/chinese-independent-blogs

Generated by RepoPilot · 2026-05-07 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/timqian/chinese-independent-blogs shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 4d ago
  • 50+ active contributors
  • Distributed ownership (top contributor 48% of recent commits)
  • MIT licensed
  • CI configured
  • ⚠ No test directory detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live timqian/chinese-independent-blogs repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/timqian/chinese-independent-blogs.

What it runs against: a local clone of timqian/chinese-independent-blogs — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in timqian/chinese-independent-blogs | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | Last commit ≤ 34 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>timqian/chinese-independent-blogs</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of timqian/chinese-independent-blogs. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/timqian/chinese-independent-blogs.git
#   cd chinese-independent-blogs
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of timqian/chinese-independent-blogs and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "timqian/chinese-independent-blogs(\.git)?\b" \
  && ok "origin remote is timqian/chinese-independent-blogs" \
  || miss "origin remote is not timqian/chinese-independent-blogs (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 34 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~4d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/timqian/chinese-independent-blogs"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

A curated list of Chinese independent blogs, aggregated with their RSS feeds and served as a Markdown table, an OPML file, and a web interface. It solves the discoverability problem for Chinese developers and readers who want to follow high-quality indie tech blogs without relying on centralized platforms. Flat structure: blogs-original.csv is the single source of truth; Python scripts (linter.py, readme_render.py, opml_generator.py) validate it and render the Markdown README and the OPML feed file; GitHub Actions workflows (in .github/workflows/) run on PR/push to lint and regenerate the outputs automatically.

👥Who it's for

Chinese software developers, technical writers, and blog readers who want to discover and subscribe to independent Chinese tech blogs; contributors who submit new blogs via pull requests; RSS aggregator users who import the OPML feed.

🌱Maturity & risk

Actively maintained with CI/CD automation (GitHub Actions workflows: pr_lint.yml and updateInfo.yml), but relatively simple scope with ~8.7KB of Python code and no test suite. The repo appears stable and focused on curation rather than complex features, with community engagement via Telegram and sponsorships, suggesting production-ready status for its specific use case.

Single primary maintainer (timqian) creates maintenance risk if they become unavailable. No test coverage means changes to linter.py or opml_generator.py could break the build without immediate detection. The data source (blogs-original.csv) and feed URLs are manually maintained, creating staleness and link-rot risks that require ongoing human curation.

Active areas of work

Automated PR linting and README/feed generation via GitHub Actions workflows. The updateInfo.yml workflow suggests periodic data refresh. Primary activity appears to be accepting blog submissions via PRs and maintaining the CSV data source.

🚀Get running

git clone https://github.com/timqian/chinese-independent-blogs.git
cd chinese-independent-blogs
npm install
python3 linter.py
python3 readme_render.py

Daily commands:

npm install
python3 linter.py          # Validate CSV against rules
python3 readme_render.py    # Generate README.md from CSV
python3 opml_generator.py   # Generate feed.opml from CSV

🗺️Map of the codebase

  • blogs-original.csv: Single source of truth containing all blog metadata (feed URL, intro, address, tags) — any modification here cascades to README and OPML
  • linter.py: Validates CSV format, feed URLs, and required fields; enforced by pr_lint.yml workflow on every PR
  • readme_render.py: Transforms CSV into the Markdown table displayed in README.md; run by updateInfo.yml workflow
  • opml_generator.py: Converts CSV to feed.opml for RSS reader imports; generates the aggregate feed subscription file
  • .github/workflows/pr_lint.yml: Runs linter.py on every PR to enforce data quality before merge
  • .github/workflows/updateInfo.yml: Scheduled/triggered workflow that regenerates README and OPML from CSV and auto-commits changes
  • package.json: Declares markdown-table and graphql dependencies used by rendering scripts

🛠️How to make changes

To add blogs: edit blogs-original.csv and submit PR (GitHub Actions will lint and regenerate outputs). To fix validation rules: modify linter.py. To change README format: edit readme_render.py. To alter OPML structure: modify opml_generator.py. Check .github/workflows/ to understand automation triggers.

🪤Traps & gotchas

Python scripts assume Python 3 but version not pinned (no requirements.txt or pyproject.toml — check if specific packages are needed). The CSV format is strict; malformed rows cause linter failures but errors may not be immediately obvious. Feed URLs are not validated for liveness — dead feeds remain in the list unless manually reported. The updateInfo.yml workflow auto-commits; if you run rendering scripts locally, manual CSV edits may conflict with auto-generated changes from the workflow.
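Because the CSV format is strict but errors surface only when the linter runs, a quick local pre-check can save a failed PR. A minimal sketch, assuming a four-column row layout (name, address, feed URL, tags) — the real blogs-original.csv header may differ, so check the file before relying on this:

```python
import csv
import io
from urllib.parse import urlparse

# Assumed column order: name, address, feed URL, tags. Adapt to the real header.
EXPECTED_FIELDS = 4

def check_row(row):
    """Return a list of problems with a single CSV row (empty list = looks OK)."""
    problems = []
    if len(row) != EXPECTED_FIELDS:
        problems.append(f"expected {EXPECTED_FIELDS} fields, got {len(row)}")
        return problems
    name, address, feed, tags = (cell.strip() for cell in row)
    if not name:
        problems.append("empty blog name")
    for label, url in (("address", address), ("feed", feed)):
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            problems.append(f"{label} is not a valid http(s) URL: {url!r}")
    if not tags:
        problems.append("empty tags field")
    return problems

sample = '示例博客,https://example.com,https://example.com/rss.xml,技术; 随笔\n'
bad = '坏行,not-a-url,https://example.com/rss.xml,\n'
rows = list(csv.reader(io.StringIO(sample + bad)))
print(check_row(rows[0]))  # []
print(check_row(rows[1]))
```

Running this over a new row before opening a PR catches the most common rejections (missing fields, malformed URLs) without waiting for CI.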

💡Concepts to learn

  • OPML (Outline Processor Markup Language) — This project exports data as feed.opml for bulk RSS subscriptions — understanding OPML structure is essential for maintaining opml_generator.py and enabling users to import all blogs into their readers at once
  • RSS Feed Format — The entire project aggregates RSS feed URLs; understanding RSS structure helps validate opml_generator.py output and debug feed-related issues
  • GitHub Actions Workflows — Automation is critical here: pr_lint.yml and updateInfo.yml run Python scripts automatically; understanding workflow syntax is necessary to extend CI/CD (e.g., adding feed liveness checks)
  • CSV Parsing and Validation — blogs-original.csv is the source of truth but has no formal schema; linter.py does ad-hoc validation — understanding CSV edge cases (escaping, missing fields, encoding) prevents data corruption during renders
  • Markdown Table Generation — readme_render.py uses markdown-table to convert CSV → README table format; knowing this dependency helps debug formatting issues and adds features like sorting or filtering
  • Data Schema and Curation — No formal schema definition exists (no JSON Schema, TypeScript types, or Pydantic model); this is a coordination bottleneck — proposing and implementing a formal schema would improve contributor experience and automate more validations
  • Inter-process Data Pipeline — Three separate Python scripts consume and transform the same CSV independently; they must stay synchronized — understanding idempotency and ordering prevents inconsistencies between README and OPML outputs
  • RSS-Bridge/rss-bridge — Generates RSS feeds for websites that don't provide them natively; complements this project by enabling feed creation for blogs without native RSS
  • kill-the-newsletter/kill-the-newsletter — Converts email newsletters to RSS feeds; alternative subscription method for users of this blog list
  • GetStream/awesome-feed — Similar curated feed/blog aggregation project; shares the same problem domain of discovering and organizing feeds
  • timqian/openmoji — Another project by the same maintainer; shows pattern of open-source curation and community-driven lists
  • awesome-selfhosted/awesome-selfhosted — Parallel curation model for self-hosted software; demonstrates similar GitHub-based list maintenance patterns
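To make the OPML concept concrete, here is a minimal sketch of the CSV → OPML step using only the standard library. The field names are hypothetical and the real opml_generator.py may shape its output differently:

```python
import xml.etree.ElementTree as ET

def blogs_to_opml(blogs):
    """Render a list of {title, html_url, feed_url} dicts as an OPML 2.0 string."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = "Chinese Independent Blogs"
    body = ET.SubElement(opml, "body")
    for blog in blogs:
        # type="rss" with xmlUrl/htmlUrl is the conventional shape RSS readers import.
        ET.SubElement(body, "outline", type="rss", text=blog["title"],
                      title=blog["title"], xmlUrl=blog["feed_url"],
                      htmlUrl=blog["html_url"])
    return ET.tostring(opml, encoding="unicode")

print(blogs_to_opml([{"title": "Example Blog",
                      "html_url": "https://example.com",
                      "feed_url": "https://example.com/rss.xml"}]))
```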

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add validation tests for blogs-original.csv with linter.py

The repo has a linter.py file that likely validates blog entries, but there's no automated test suite to catch data quality issues. This would prevent malformed entries from being merged and ensure RSS feeds are valid before they reach users. The existing pr_lint.yml workflow could be enhanced to run these tests.

  • [ ] Create tests/test_linter.py with unit tests for linter.py functions
  • [ ] Add test cases for invalid RSS feeds, missing fields, duplicate entries in blogs-original.csv
  • [ ] Update .github/workflows/pr_lint.yml to run pytest on pull requests
  • [ ] Add pytest and related dependencies to package.json or create requirements.txt for Python deps

Add CSV schema validation and type checking for blogs-original.csv

With hundreds of blog entries in blogs-original.csv, there's no enforced schema to catch common errors (missing RSS URL, invalid domain format, etc.). Adding a jsonschema-like validator would improve data quality and provide clearer error messages when contributors submit invalid entries.

  • [ ] Create a schema_validator.py that defines required columns (RSS feed, Introduction, Address, tags) and their validation rules
  • [ ] Add checks for valid URL format, non-empty descriptions, and consistent formatting
  • [ ] Integrate validator into pr_lint.yml workflow to check new/modified rows in blogs-original.csv
  • [ ] Document the schema and validation rules in README.md under the submission guidelines
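A hedged sketch of the proposed schema_validator.py (which does not exist yet); the column names and rules below are assumptions to be checked against the real CSV header:

```python
import re

# Hypothetical per-column rules for blogs-original.csv rows.
SCHEMA = [
    ("name",    lambda v: bool(v.strip())),
    ("address", lambda v: re.match(r"^https?://\S+$", v) is not None),
    ("feed",    lambda v: v == "" or re.match(r"^https?://\S+$", v) is not None),
    ("tags",    lambda v: bool(v.strip())),
]

def validate(row):
    """Return (column_name, value) pairs that fail the schema."""
    return [(name, value) for (name, check), value in zip(SCHEMA, row)
            if not check(value)]

print(validate(["示例", "https://example.com", "https://example.com/rss.xml", "技术"]))  # []
print(validate(["", "ftp://x", "https://ok.example/rss.xml", "生活"]))
```

Returning named failures (rather than a boolean) gives contributors the clearer error messages the idea calls for.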

Add test coverage for opml_generator.py and readme_render.py output

These scripts generate feed.opml and the blog table in README.md, but there are no tests verifying the output is correct. Invalid OPML would break feed readers; malformed Markdown would break the README display. Automated tests would catch regressions.

  • [ ] Create tests/test_opml_generator.py to validate feed.opml structure (valid XML, required OPML fields, correct feed URLs)
  • [ ] Create tests/test_readme_render.py to verify markdown table syntax and that all blog entries appear in README.md
  • [ ] Add test fixtures with sample blog data in tests/fixtures/sample_blogs.csv
  • [ ] Integrate tests into pr_lint.yml or create a separate test.yml workflow that runs on all PRs
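One way such an output test could look. The OPML fixture below is inline and hypothetical; a real test would parse the generated feed.opml instead:

```python
import xml.etree.ElementTree as ET

SAMPLE_OPML = """<opml version="2.0">
  <head><title>Chinese Independent Blogs</title></head>
  <body>
    <outline type="rss" text="Example" xmlUrl="https://example.com/rss.xml"/>
  </body>
</opml>"""

def opml_problems(text):
    """Return a list of structural problems with an OPML document."""
    problems = []
    try:
        root = ET.fromstring(text)
    except ET.ParseError as exc:
        return [f"not well-formed XML: {exc}"]
    if root.tag != "opml":
        problems.append(f"root element is <{root.tag}>, expected <opml>")
    outlines = root.findall(".//outline")
    if not outlines:
        problems.append("no <outline> entries found")
    problems += [f"outline {o.get('text')!r} missing xmlUrl"
                 for o in outlines if not o.get("xmlUrl")]
    return problems

print(opml_problems(SAMPLE_OPML))  # []
```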

🌿Good first issues

  • Add a test suite (missing entirely): create tests/test_linter.py to verify CSV validation rules and test_readme_render.py to check Markdown table generation — this catches regressions when linter.py or readme_render.py change.
  • Document the CSV schema: create CONTRIBUTING.md with column definitions (required fields, valid tag values, feed URL format rules) and add a schema comment block to blogs-original.csv itself — reduces PR rejections from malformed submissions.
  • Add a CI check for feed URL liveness: extend linter.py with an optional flag to verify each RSS feed URL returns HTTP 200 (or document why it's skipped) — catches stale feeds before they're merged.
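For the liveness-check idea, a minimal standard-library sketch (the function name and User-Agent string are made up for illustration; a real CI job would gate this behind a flag and tolerate transient failures):

```python
import urllib.request
import urllib.error

def status_ok(code):
    """Treat 2xx and 3xx responses as 'live' for a liveness check."""
    return 200 <= code < 400

def feed_is_live(url, timeout=10):
    """Return True if the feed URL answers with a live HTTP status."""
    req = urllib.request.Request(url, method="HEAD",
                                 headers={"User-Agent": "feed-liveness-check"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return status_ok(resp.status)
    except urllib.error.HTTPError as exc:
        # Some servers reject HEAD requests; a fuller check would retry with GET.
        return status_ok(exc.code)
    except (urllib.error.URLError, TimeoutError):
        return False
```

HEAD keeps the check cheap, but since some hosts mishandle it, a production version would likely fall back to GET and retry before declaring a feed dead.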

Top contributors

Click to expand

📝Recent commits

Click to expand
  • 5ca46bd — update README and feed.opml after adding blog(s) (actions-user)
  • 2f44cc6 — 傲雪の, https://www.oxue.de, https://www.oxue.de/rss.xml, 技术; 写作; 生活; 随笔 (xxytan)
  • 33d6d06 — update README and feed.opml after adding blog(s) (actions-user)
  • 707311a — Yourlai's Blog, https://yourlai.com/, https://yourlai.com/feed/, 编程; 嵌入式; 技术; 摄影 (Yourlai2015)
  • 8927ce6 — update README and feed.opml after adding blog(s) (actions-user)
  • e4582e9 — 涵哲子居, https://afipo.top/, https://afipo.top/rss.xml, 日常; 随笔; 乱七八糟 (afingpo)
  • 5f72905 — update README and feed.opml after adding blog(s) (actions-user)
  • 335fa46 — 廊桥遗梦, https://blog.moran.im, https://blog.moran.im/rss.xml, 随笔; 生活; 教程 (moranfong)
  • 2875ffa — update README and feed.opml after adding blog(s) (actions-user)
  • 4515fa6 — 傥师妹TangShiMei的小空间, https://blog.224418.xyz/, https://blog.224418.xyz/rss2.xml, 技术; 生活; 随笔; 折腾 (TangShiMei02)

🔒Security observations

The codebase has moderate security concerns, primarily outdated dependencies, a lack of input validation and output encoding in the data-processing pipeline, and missing security infrastructure. The project processes user-submitted blog data without visible security controls, creating potential XSS and injection risks. No automated testing or security policy is in place, and the GitHub workflow automation (pr_lint.yml, updateInfo.yml) should be reviewed for secure execution practices. Immediate actions: update dependencies, implement input validation, add a SECURITY.md, and establish test coverage for the data-processing functions.

  • High · Outdated GraphQL Dependencies — package.json - dependencies. The package.json specifies graphql@^16.8.1 and graphql-request@^4.3.0. These versions are outdated and may contain known security vulnerabilities. The caret (^) allows minor version updates which could introduce breaking changes or security issues. Fix: Update to the latest stable versions of graphql and graphql-request. Run 'npm audit' to identify specific vulnerabilities and 'npm update' to patch them. Consider using npm audit --fix for automated patching.
  • Medium · No Test Coverage — package.json - scripts. The package.json shows 'test' script is not implemented ('echo "Error: no test specified" && exit 1'). Without automated tests, security vulnerabilities in data processing pipelines (CSV parsing, OPML generation, etc.) may go undetected. Fix: Implement comprehensive test suite including unit tests for data validation, CSV parsing, and OPML generation. Add security-focused tests for input validation and output encoding.
  • Medium · Potential XSS Risk in Data Processing — readme_render.py, opml_generator.py, linter.py. The codebase processes blog data (blogs-original.csv, OPML generation) and renders it (readme_render.py, opml_generator.py). Without visible sanitization logic, user-submitted blog URLs, titles, or descriptions could contain malicious content that gets rendered unsanitized. Fix: Implement input validation and output encoding for all user-submitted data. Use established libraries for OPML and Markdown generation that automatically escape dangerous content. Validate URLs and sanitize HTML/XML output.
  • Medium · No Security Policy or Vulnerability Disclosure Process — Repository root. The repository lacks a SECURITY.md file, security.txt, or clear vulnerability disclosure guidelines. This makes it difficult for security researchers to report issues responsibly. Fix: Create a SECURITY.md file with clear instructions for reporting security vulnerabilities. Include contact information and expected response times. Consider using GitHub's security advisory feature.
  • Low · Missing Dependency Lock File Guarantee — package.json. While package-lock.json exists, npm dependencies (graphql, graphql-request, markdown-table) rely on semantic versioning which could allow unexpected minor version updates that introduce security issues. Fix: Use exact versions in package.json instead of caret (^) ranges for security-critical dependencies. Use 'npm ci' instead of 'npm install' in CI/CD pipelines to ensure lock file is respected.
  • Low · CSV Injection Risk — blogs-original.csv, linter.py. The blogs-original.csv file is likely processed by linter.py and readme_render.py. If CSV parsing doesn't properly escape formula-starting characters (=, +, -, @), malicious entries could execute code when opened in spreadsheet applications. Fix: Implement CSV injection prevention by escaping cells that start with special characters (=, +, -, @, tab). Use CSV libraries with built-in protection. Validate all user-submitted content before adding to CSV.

LLM-derived; treat as a starting point, not a security audit.
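To illustrate the CSV-injection mitigation described in the last observation, a minimal sketch (the function name is hypothetical): spreadsheet applications treat cells starting with `=`, `+`, `-`, `@`, or a tab as formulas, so such cells can be neutralized with a leading single quote before being written out.

```python
# Characters that cause spreadsheet apps to interpret a cell as a formula.
FORMULA_PREFIXES = ("=", "+", "-", "@", "\t", "\r")

def neutralize_cell(value: str) -> str:
    """Escape a cell so spreadsheet software will not execute it as a formula."""
    if value.startswith(FORMULA_PREFIXES):
        return "'" + value
    return value

print(neutralize_cell('=HYPERLINK("https://evil.example")'))
print(neutralize_cell("https://example.com"))  # unchanged
```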


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
