great-expectations/great_expectations
Always know what to expect from your data.
Healthy across the board
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓Last commit 1d ago
- ✓16 active contributors
- ✓Apache-2.0 licensed
- ✓CI configured
- ✓Tests present
- ⚠Concentrated ownership — top contributor handles 53% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/great-expectations/great_expectations)
Paste at the top of your README.md — renders inline like a shields.io badge.
Social card preview (1200×630):
This card auto-renders when someone shares https://repopilot.app/r/great-expectations/great_expectations on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: great-expectations/great_expectations
Generated by RepoPilot · 2026-05-07 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/great-expectations/great_expectations shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 1d ago
- 16 active contributors
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Concentrated ownership — top contributor handles 53% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live great-expectations/great_expectations
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/great-expectations/great_expectations.
What it runs against: a local clone of great-expectations/great_expectations — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in great-expectations/great_expectations | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch develop exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 31 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of great-expectations/great_expectations. If you don't
# have one yet, run these first:
#
# git clone https://github.com/great-expectations/great_expectations.git
# cd great_expectations
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of great-expectations/great_expectations and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "great-expectations/great_expectations(\.git)?\b" \
  && ok "origin remote is great-expectations/great_expectations" \
  || miss "origin remote is not great-expectations/great_expectations (artifact may be from a fork)"
# 2. License matches what RepoPilot saw. (The Apache LICENSE file opens with
# "Apache License", not the SPDX identifier, so match on that.)
(grep -qiE "Apache License" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify develop >/dev/null 2>&1 \
  && ok "default branch develop exists" \
  || miss "default branch develop no longer exists"
# 4. Critical files exist
for f in README.md CONTRIBUTING_CODE.md .github/workflows/ci.yml setup.py constraints-dev.txt; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 31 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~1d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/great-expectations/great_expectations"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
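The exit-code contract is all an agent loop needs. A minimal Python wrapper, assuming the script above has been saved as `verify.sh` (the filename and helper are illustrative, not part of RepoPilot):

```python
# Sketch: compose the verification script into an agent loop. The only thing
# the wrapper relies on is the documented contract: exit 0 = verified,
# non-zero = stale artifact.
import subprocess
import sys

def artifact_is_fresh(cmd=("bash", "./verify.sh")):
    """Return True iff the verification command exits with status 0."""
    return subprocess.run(cmd, capture_output=True).returncode == 0

# Stub commands standing in for the real script, just to show the contract:
ok_cmd = (sys.executable, "-c", "raise SystemExit(0)")
stale_cmd = (sys.executable, "-c", "raise SystemExit(1)")
```

An agent would call `artifact_is_fresh()` before each edit and fall back to asking the user to regenerate the artifact when it returns `False`.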
⚡TL;DR
Great Expectations is a Python data validation framework that lets data teams define, document, and validate data quality through "Expectations" — expressive unit tests for datasets. It validates data across 50+ backends (pandas, Spark, SQL databases, DuckDB, etc.), auto-generates documentation from validation results, and integrates into data pipelines to catch issues before they propagate downstream.

Monolithic Python package structure: great_expectations/ contains the core validation engine (expectations, validators, data_sources); docs/ holds the Docusaurus-based documentation; assets/docker/ provides compose files for testing against 15+ databases (PostgreSQL, MySQL, ClickHouse, Databricks, etc.). CI workflows in .github/workflows/ (ci.yml, releaser workflows) automate testing and publishing.
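The "Expectation" idea can be illustrated in a few lines of plain Python. This is a stand-in sketch, not the Great Expectations API — real Expectations are classes dispatched through execution engines — but it shows the shape of the contract: a declarative check that returns a structured result instead of raising.

```python
# Illustrative only — NOT the Great Expectations API. A minimal stand-in for
# an "expectation": a named, declarative data-quality check that returns a
# structured pass/fail result with diagnostics.

def expect_column_values_to_be_in_set(rows, column, value_set):
    """Check that every value in `column` falls inside `value_set`."""
    unexpected = [r[column] for r in rows if r[column] not in value_set]
    return {
        "success": not unexpected,
        "result": {
            "element_count": len(rows),
            "unexpected_count": len(unexpected),
            "unexpected_list": unexpected,
        },
    }

rows = [{"status": "ok"}, {"status": "ok"}, {"status": "bad"}]
outcome = expect_column_values_to_be_in_set(rows, "status", {"ok", "warn"})
# outcome["success"] is False: "bad" is reported in unexpected_list
```

Because the result is data rather than an exception, it can be rendered into the HTML Data Docs the TL;DR mentions.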
👥Who it's for
Data engineers, analytics engineers, and data scientists who need to validate data quality in production pipelines, catch schema regressions, and document data contracts for cross-functional teams. Also used by data platform teams building internal data quality infrastructure.
🌱Maturity & risk
Highly mature and production-ready. The project has 8,900+ GitHub stars, active CI/CD via GitHub Actions and Azure DevOps, extensive test coverage tracked via Codecov, Ruff linting, and pre-commit hooks. Supports Python 3.10–3.13 with experimental 3.14+ support. Regular releases published to PyPI with documented compatibility matrices for 50+ data sources.
Low risk for core functionality; codebase is well-established with comprehensive CI/CD. However, risk exists in the wide breadth of integrations (50+ data sources)—not all may receive equal maintenance. The reliance on external system compatibility (Spark, Databricks, ClickHouse, MSSQL, etc.) means breaking changes in those systems can affect GX. Check the specific data source you use in the compatibility reference before upgrading.
Active areas of work
Active development on data source support and validation framework enhancements. GitHub workflows show focus on: PR title validation, stale issue cleanup, release automation (tag → publish pipeline), and architecture documentation via CodeSee. Docker compose files indicate ongoing integration testing against ClickHouse, Databricks, MSSQL, MySQL, PostgreSQL, and SingleStore.
🚀Get running
git clone https://github.com/great-expectations/great_expectations.git
cd great_expectations
pip install -e . # Install in editable mode
# For development with all dependencies:
pip install -e ".[dev,test]"
Daily commands:
There is no single "dev server" in the traditional sense. Instead:
1. Run the tests: `pytest tests/`
2. Validate a dataset with GX: `python -c "import great_expectations as ge; df = ge.dataset.PandasDataset({'col': [1,2,3]}); print(df.expect_column_values_to_be_in_set('col', [1,2,3,4]))"` (note: `ge.dataset.PandasDataset` is the legacy API; recent releases use the Fluent API, so check the current docs for your installed version)
3. For database-specific testing, spin up services first: `cd assets/docker/postgresql && docker-compose up`, then connect via a SQLAlchemy URI.
🗺️Map of the codebase
- README.md — Entry point documentation explaining Great Expectations' mission to validate data quality and manage expectations across pipelines.
- CONTRIBUTING_CODE.md — Mandatory contributor guide covering development practices, testing conventions, and coding standards for the 600-file codebase.
- .github/workflows/ci.yml — Primary CI/CD pipeline orchestrating tests, builds, and deployments across Python versions and database backends.
- setup.py — Package configuration defining dependencies, entry points, and plugin architecture for Great Expectations installation.
- constraints-dev.txt — Development dependency pinning ensuring reproducible test environments across Python 3.10–3.13 and constraint matrices.
- .github/teams.yml — Team ownership map defining code review responsibilities and maintainer assignments across modules.
- .pre-commit-config.yaml — Pre-commit hooks enforcing linting, formatting, and validation standards before code commits.
🛠️How to make changes
Add a Custom Expectation
- Create an expectation class extending the base expectation in the contrib plugin structure, following naming conventions (contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations)
- Register the expectation in the plugin's `__init__.py` to make it discoverable by Great Expectations (contrib/capitalone_dataprofiler_expectations/__init__.py)
- Add validation logic implementing the expectation semantics, e.g. column statistics, uniqueness, range checks (contrib/capitalone_dataprofiler_expectations/capitalone_dataprofiler_expectations/expectations)
- Document the expectation with a docstring and usage examples in the contrib README (contrib/capitalone_dataprofiler_expectations/README.md)
Add Support for a New Database Backend
- Create a docker-compose.yml for the new database in assets/docker/{db_name}/ for test fixtures (assets/docker/postgresql/docker-compose.yml)
- Implement the SQLAlchemy dialect connector in the main codebase, referenced by the CI constraint matrix (ci/azure-pipelines-sqlalchemy-compatibility.yml)
- Add database-specific tests to the CI matrix in dev-install-matrix.yml (ci/dev-install-matrix.yml)
- Update documentation with connection examples and limitations in the docs snippets (ci/checks/validate_docs_snippets.py)
Add a New CI Check or Validation
- Create a Python or shell check script in the ci/checks/ directory, following existing naming patterns (ci/checks/check_integration_test_gets_run.py)
- Reference the check script from the main CI workflow so it runs during PR validation (.github/workflows/ci.yml)
- Document the check's purpose and failure conditions in ci/README.md (ci/README.md)
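To make the pattern concrete, here is a hedged sketch of what a ci/checks-style script might look like. The check itself (rejecting stray `breakpoint()` calls) and every name in it are invented for illustration — the real checks live in ci/checks/ and follow their own conventions:

```python
# Hypothetical ci/checks-style script (all names illustrative). The pattern:
# scan the tree, collect offenders, report them, and exit non-zero so the CI
# job fails visibly.
import pathlib

def find_offending_files(root, forbidden="breakpoint("):
    """Return paths of Python files under `root` containing `forbidden`."""
    return [
        p for p in pathlib.Path(root).rglob("*.py")
        if forbidden in p.read_text(errors="ignore")
    ]

# A real check would then print "FAIL: <path>" for each offender and call
# sys.exit(1) when the list is non-empty, so the CI workflow step fails.
```

The exit-code convention mirrors the Verify script earlier in this artifact: zero means clean, non-zero means the check surfaced a problem.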
Publish Updated Documentation
- Validate that all code snippets in the documentation render correctly via the snippet checker (ci/checks/validate_docs_snippets.py)
- Push to the develop branch; ReadTheDocs automatically builds and deploys per .readthedocs.yml (.readthedocs.yml)
- Optionally trigger an Algolia search index update to make expectations searchable (assets/scripts/AlgoliaScripts/upload_s3_expectation_to_algolia.js)
🔧Why these technologies
- Python (3.10–3.13 support) — Primary language for data engineering workflows; supports pandas, SQLAlchemy, and data science ecosystem integration.
- SQLAlchemy — Unified database abstraction enabling support for 10+ SQL dialects (PostgreSQL, MySQL, Snowflake, Databricks, etc.) with single codebase.
- Docker Compose — Test fixture orchestration for 8+ database backends without requiring pre-existing infrastructure; reproducible CI environments.
- Azure Pipelines + GitHub Actions — Dual CI/CD for constraint testing matrices (py310-313, pandas v1/v2, pydantic v1/v2) and release automation.
- ReadTheDocs + Algolia — Documentation-as-code with automated builds and search indexing for 500+ expectation definitions and API reference.
- Node.js/Algolia Scripts — Search infrastructure for expectation discovery and documentation SEO without rebuilding Python for indexing.
⚖️Trade-offs already made
- Plugin architecture in contrib/ for custom expectations
  - Why: Allows community extensions without modifying the core library; reduces maintenance burden for niche integrations.
  - Consequence: Requires plugin authors to maintain separate packages; discovery depends on documentation and community adoption.
- Docker-based test fixtures instead of cloud-managed test instances
  - Why: Eliminates cloud costs and auth complexity; enables offline CI runs and developer reproducibility.
  - Consequence: Resource-intensive CI pipeline; larger docker-compose files; doesn't test against fully managed service quirks (e.g., Databricks cluster cold starts).
- Dual validation engines (core logic + plugin system)
  - Why: Core expectations handle common cases; plugins enable domain-specific validators without bloating the main library.
  - Consequence: Potential inconsistency in behavior between core and plugin expectations; the testing matrix explodes with plugin combinations.
- Constraint testing matrices (py310–313, pandas/pydantic versions)
  - Why: Ensures forward compatibility and dependency flexibility; supports diverse production environments.
  - Consequence: Exponential CI runtime growth; maintenance burden for fixing compatibility regressions; slower feedback loops.
🚫Non-goals (don't propose these)
- Does not enforce data governance policies or access controls—focuses on validation logic, not authorization.
- Not a real-time streaming validator—designed for batch validation of data snapshots, not event-stream monitoring.
- Does not handle ETL orchestration—integrates with tools like Airflow/Dagster but does not schedule or execute pipelines.
- Not a data catalog or lineage tool—validates data contracts independently of metadata discovery systems.
- Does not support graph databases or document stores as first-class data sources—focuses on SQL/tabular backends.
- Not a machine learning training framework—validates data quality upstream of model training, not during model inference.
🪤Traps & gotchas
- SQLAlchemy URI syntax varies by backend: test against the correct dialect (postgresql://, mysql+pymysql://, etc.) in assets/docker/*/docker-compose.yml.
- Pandas vs. Spark execution paths diverge: some Expectations only work on Spark; see the `execution_engine` parameter in expectation definitions.
- Docker services must be running for integration tests: `pytest tests/integration/` requires the databases to be up; use the Docker Compose files in assets/docker/.
- The GX_PYTHON_EXPERIMENTAL env var is needed for Python 3.14+: set it during install to enable experimental support.
- Jinja templates in docs: if modifying documentation, rebuild via the .readthedocs.yml config, not direct file edits.
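The dialect-prefix trap can be made concrete with a small lookup table. The templates below use standard SQLAlchemy dialect+driver prefixes and conventional default ports; the helper function and its names are illustrative, not part of GX:

```python
# Sketch: the same logical connection needs a different SQLAlchemy URI scheme
# per backend. Prefixes are standard SQLAlchemy dialect+driver identifiers;
# ports are the conventional defaults for each database.
URI_TEMPLATES = {
    "postgresql": "postgresql+psycopg2://{user}:{pw}@{host}:5432/{db}",
    "mysql": "mysql+pymysql://{user}:{pw}@{host}:3306/{db}",
    "mssql": "mssql+pyodbc://{user}:{pw}@{host}:1433/{db}?driver=ODBC+Driver+18+for+SQL+Server",
    "clickhouse": "clickhouse+native://{user}:{pw}@{host}:9000/{db}",
}

def connection_uri(backend, **parts):
    """Fill in the URI template for `backend`, failing fast on unknown dialects."""
    if backend not in URI_TEMPLATES:
        raise ValueError(f"no URI template for backend {backend!r}")
    return URI_TEMPLATES[backend].format(**parts)
```

Failing fast on an unknown backend beats the usual alternative: a connect-time error deep inside SQLAlchemy with a misleading message.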
🏗️Architecture
💡Concepts to learn
- Expectations (Data Quality Tests) — Core abstraction in GX: every validation is an Expectation; understanding the Expectation class hierarchy and execution-engine pattern is essential to extend GX.
- Execution Engines (Pandas vs. Spark vs. SQL) — GX generates SQL, pandas code, or Spark code depending on the backend; knowing which engine is active determines how Expectations are evaluated.
- Validation Results & Documentation Rendering — GX auto-generates HTML/Markdown docs from validation runs; understanding the result schema and the Jinja templating in docs/ is critical for customizing reports.
- Data Sources & Datasources — Abstraction over pandas DataFrames, Spark RDDs, SQL tables, DuckDB files, etc.; adding support for a new database requires implementing a new Datasource class.
- Validator Orchestration & Execution Flow — The Validator class delegates Expectations to execution engines; understanding the validation lifecycle (load → validate → document) is essential for debugging and extending.
- Schema & Column Constraints — GX validates schema (column names, types, nullability) and data constraints (ranges, patterns, cardinality); a core feature for catching regressions in data pipelines.
- Docker Compose Multi-Database Testing — GX supports 50+ data sources; CI spins up databases in assets/docker/ for integration testing; essential to understand when adding or modifying data source logic.
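The execution-engine concept can be sketched in miniature: one logical expectation evaluated two ways, once as an in-memory predicate and once compiled to a SQL query. This is illustrative only, not GX internals:

```python
# Illustrative sketch (not GX internals): the same "values in set" check
# expressed for two "execution engines".

def in_set_pandas_engine(values, value_set):
    """Evaluate the check directly against in-memory values."""
    return all(v in value_set for v in values)

def in_set_sql_engine(table, column, value_set):
    """Compile the same check to SQL: zero returned rows means success."""
    quoted = ", ".join(repr(v) for v in sorted(value_set))
    return (
        f"SELECT COUNT(*) AS unexpected_count "
        f"FROM {table} WHERE {column} NOT IN ({quoted})"
    )
```

The point of the pattern: the Expectation stays declarative, and each engine decides whether to pull data into memory or push the predicate down to the database.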
🔗Related repos
- soda-sql/soda-sql — Alternative data quality framework similar to GX; compare architecture for schema validation and data profiling approaches.
- dbt-labs/dbt-core — Complementary data transformation tool; GX often runs post-dbt to validate transformations, and many teams integrate GX with dbt via the dbt-expectations package.
- sqlalchemy/sqlalchemy — Core dependency for SQL-based data sources; GX's SQLAlchemy datasource directly wraps this for cross-database support.
- apache/spark — The GX Spark datasource wraps PySpark; necessary for validating large-scale distributed data.
- great-expectations/great_expectations_cloud — Commercial cloud offering built on GX Core; extends GX with hosted validation, alerting, and centralized data quality dashboards.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add dependency security and license scanning GitHub Action
The repo has multiple package.json files (e.g., assets/scripts/AlgoliaScripts/package.json) and Python dependencies, but there's no visible automated security scanning in the workflows directory. Adding Dependabot or similar scanning would catch vulnerable dependencies early, especially important for the Algolia integration scripts that handle external API credentials via dotenv.
- [ ] Create .github/workflows/dependency-scan.yml for Python dependencies using Dependabot or pip-audit
- [ ] Create .github/workflows/npm-security.yml for Node.js dependencies in assets/scripts/AlgoliaScripts/
- [ ] Add status badge to README.md linking to security scan results
- [ ] Document in CONTRIBUTING_CODE.md or CONTRIBUTING_WORKFLOWS.md how contributors should address security findings
Add integration tests for Algolia upload scripts with mocked API
The assets/scripts/AlgoliaScripts/ directory contains upload_s3_expectation_to_algolia.js and upload_s3_packages_to_algolia.js scripts with npm scripts defined, but there's no visible test coverage. These scripts handle credential management (dotenv) and external API calls, which are high-risk for bugs and security issues.
- [ ] Create assets/scripts/AlgoliaScripts/tests/ directory with test files
- [ ] Add unit tests for upload_s3_expectation_to_algolia.js mocking algoliasearch and node-fetch
- [ ] Add unit tests for upload_s3_packages_to_algolia.js with same mocking strategy
- [ ] Update package.json to add 'test' script and testing dependencies (jest, nock for HTTP mocking)
- [ ] Add CI step in .github/workflows/ci.yml to run 'npm test' in AlgoliaScripts directory
Document Docker Compose setup and add validation workflow
The repo contains 10+ docker-compose.yml files for different data sources (ClickHouse, Databricks, MSSQL, MySQL, PostgreSQL, etc.) in assets/docker/, but there's no documented process for validating they work or guidance for contributors on when/how to use them. A validation workflow and setup guide would improve contributor experience.
- [ ] Create assets/docker/README.md documenting each data source Docker setup with prerequisites and startup commands
- [ ] Add .github/workflows/docker-compose-lint.yml to validate all docker-compose.yml files using docker-compose config
- [ ] Create assets/docker/test-compose.sh script to verify a subset of compose files start without error
- [ ] Update IDE_SETUP_TIPS.md to reference the Docker setup documentation for local development
- [ ] Document in CONTRIBUTING_CODE.md which Docker services are required for which test suites
🌿Good first issues
- Add test coverage for a specific data source integration: review tests/integration/datasources/ for gaps in ClickHouse, Databricks, or SingleStore test cases, then add integration tests backed by assets/docker/{datasource}/docker-compose.yml.
- Document a missing Expectation class: identify undocumented Expectations in great_expectations/expectations/ and add docstrings plus examples in docs/docusaurus/docs/reference/expectations/.
- Implement a new Expectation type, e.g. expect_column_values_to_match_regex_list, by extending great_expectations/expectations/column_expectations.py and adding tests in tests/expectations/.
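The semantics such an Expectation needs can be prototyped in pure Python first. The function name and `match_on` parameter below are illustrative — check the existing Expectation gallery before implementing, since a similarly named Expectation may already exist:

```python
# Illustrative prototype of "values match a regex list" semantics, not GX code.
import re

def values_failing_regex_list(values, patterns, match_on="any"):
    """Return the values that fail the regex-list check.

    match_on="any": a value passes if at least one pattern matches it.
    match_on="all": a value passes only if every pattern matches it.
    """
    compiled = [re.compile(p) for p in patterns]
    combine = any if match_on == "any" else all
    return [
        v for v in values
        if not combine(rx.search(v) for rx in compiled)
    ]
```

Porting this into a real Expectation then means wrapping the logic in the Expectation class hierarchy and adding per-engine implementations, per the "How to make changes" section above.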
⭐Top contributors
- @joshua-stauffer — 53 commits
- @wookasz — 8 commits
- @dependabot[bot] — 7 commits
- @klavavej — 7 commits
- @tyler-hoffman — 7 commits
📝Recent commits
- b77da1f — [BUGFIX] Preserve boolean values passed to add_csv_asset (fixes #11206) (#11867) (EshwarCVS)
- 6c1e971 — [RELEASE] 1.17.1 (#11866) (joshua-stauffer)
- 14476af — [DOCS] Backfill 1.17.0 changelog entry (#11865) (joshua-stauffer)
- 4364e42 — [MAINTENANCE] Bump postcss from 8.5.6 to 8.5.12 in /docs/docusaurus (#11859) (dependabot[bot])
- abcf573 — [BUGFIX] Data Docs uses vulnerable jQuery 3.4.1 (#11856) (joshua-stauffer)
- d72e298 — [BUGFIX] Use uuid4 for test datasource names to avoid global-RNG collisions (#11862) (joshua-stauffer)
- 53d3fcc — [BUGFIX] Avoid .toPandas() in Spark multicolumn unexpected values (#11633) (#11861) (joshua-stauffer)
- 381cde8 — [MAINTENANCE] Shard snowflake marker-tests + xdist for bigquery/databricks (#11850) (joshua-stauffer)
- 08c4774 — [BUGFIX] Pydantic Field alias not respected during expectation validation (#11854) (joshua-stauffer)
- c004117 — [BUGFIX] Fix docs-snippets CI broken by sqlalchemy-redshift 1.0.0 (#11857) (joshua-stauffer)
🔒Security observations
- High · Outdated node-fetch dependency — assets/scripts/AlgoliaScripts/package.json. The package.json specifies node-fetch ^2.6.7, an older major version; node-fetch v2.x has known security vulnerabilities and is no longer maintained. Fix: migrate to node-fetch v3.x or the built-in fetch API available in Node.js 18+, and run `npm audit` to identify specific CVEs.
- High · Potential hardcoded secrets via dotenv — assets/scripts/AlgoliaScripts/package.json and related scripts. The project uses dotenv (^16.0.2) for environment management; combined with scripts that call external services (Algolia, S3), .env files containing API keys may exist. If those files are not gitignored or are accidentally committed, credentials could be exposed. Fix: ensure .env files are in .gitignore, use environment-based secret management in CI/CD, rotate any exposed credentials, and add pre-commit hooks to prevent accidental credential commits.
- High · AWS S3 credentials exposure risk — assets/scripts/*.sh (S3-related scripts). Shell scripts (copy_s3_staging_to_prod.sh, download_json_from_s3.sh) interact with S3, but their implementation is not visible; they likely contain or reference AWS credentials. Fix: use AWS IAM roles and temporary credentials instead of long-term keys, implement credential rotation policies, and audit S3 access logs regularly.
- Medium · Insecure package versions with known issues — assets/scripts/AlgoliaScripts/package.json. algoliasearch at ^4.12.1 may have known vulnerabilities, and remove-markdown is a lesser-known package that may not receive frequent security updates. Fix: run `npm audit` regularly, update dependencies to the latest secure versions, and enable Dependabot or similar automated dependency scanning.
- Medium · Missing security configuration in CI/CD — .github/workflows/. Multiple GitHub Actions workflows are present but not fully visible; deployment workflows such as ge_releaser-publish.yml could expose credentials if secrets are mismanaged. Fix: audit all workflows to ensure secrets come from GitHub Secrets (never hardcoded), use OpenID Connect where possible, and apply least-privilege access to deployment jobs.
- Medium · Docker infrastructure security concerns — assets/docker/*/docker-compose.yml. Compose files across the database systems (MySQL, PostgreSQL, MSSQL, etc.) may contain hardcoded credentials, default passwords, or exposed ports without proper constraints. Fix: review all compose files for hardcoded credentials, use .env files with non-default passwords, restrict port exposure, segment networks, and never commit secrets to version control.
- Medium · SQL injection risk in database integration — great_expectations codebase (not fully visible). A validation framework that executes SQL against many databases could be vulnerable if user input is not properly parameterized. Fix: use parameterized statements/prepared queries, never concatenate user input directly into SQL, validate inputs, and prefer ORM features where available.
- Low · Nginx configuration exposure — assets/docker/mercury/nginx.conf. A web server configuration could pose risks if it contains sensitive information or insecure defaults. Fix: review the nginx configuration for security headers (CS
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.