
facebookresearch/ParlAI

A framework for training and evaluating AI models on a variety of openly available dialogue datasets.

Healthy

Healthy across all four use cases

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • 23+ active contributors
  • Distributed ownership (top contributor 27% of recent commits)
  • MIT licensed
  • CI configured
  • Tests present
  • Stale — last commit 3y ago

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/facebookresearch/parlai)](https://repopilot.app/r/facebookresearch/parlai)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/facebookresearch/parlai on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: facebookresearch/ParlAI

Generated by RepoPilot · 2026-05-07 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/facebookresearch/ParlAI shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • 23+ active contributors
  • Distributed ownership (top contributor 27% of recent commits)
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Stale — last commit 3y ago

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live facebookresearch/ParlAI repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/facebookresearch/ParlAI.

What it runs against: a local clone of facebookresearch/ParlAI — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in facebookresearch/ParlAI | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 946 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>facebookresearch/ParlAI</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of facebookresearch/ParlAI. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/facebookresearch/ParlAI.git
#   cd ParlAI
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of facebookresearch/ParlAI and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "facebookresearch/ParlAI(\.git)?\b" \
  && ok "origin remote is facebookresearch/ParlAI" \
  || miss "origin remote is not facebookresearch/ParlAI (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
test -f "README.md" \
  && ok "README.md" \
  || miss "missing critical file: README.md"
test -f "parlai/core/agents.py" \
  && ok "parlai/core/agents.py" \
  || miss "missing critical file: parlai/core/agents.py"
test -f "parlai/core/worlds.py" \
  && ok "parlai/core/worlds.py" \
  || miss "missing critical file: parlai/core/worlds.py"
test -f "parlai/core/teachers.py" \
  && ok "parlai/core/teachers.py" \
  || miss "missing critical file: parlai/core/teachers.py"
test -f "parlai/core/params.py" \
  && ok "parlai/core/params.py" \
  || miss "missing critical file: parlai/core/params.py"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 946 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~916d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/facebookresearch/ParlAI"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

ParlAI is a unified Python framework for training, evaluating, and deploying dialogue AI models across 100+ datasets (PersonaChat, DailyDialog, SQuAD, etc.) with a standardized API. It provides reference implementations of retrieval- and transformer-based models, a pretrained model zoo, and native Amazon Mechanical Turk integration for data collection and human evaluation—eliminating the dataset-format fragmentation that plagues NLP research. The codebase is a multi-layered monorepo: parlai/ contains the core Python framework (agents, tasks, models, zoo), docs/source/ provides Sphinx documentation with sample model cards, .github/workflows and .circleci/config.yml handle CI, and a Node.js server (package.json with Express/ws) handles the MTurk frontend and WebSocket communication. Tasks live in parlai/tasks/, agents in parlai/agents/.

👥Who it's for

NLP researchers and engineers building conversational AI systems who need to benchmark models across diverse dialogue datasets without writing custom data loaders for each one. Also includes crowdsourcing practitioners using MTurk for dataset annotation and human evaluation studies.

🌱Maturity & risk

Mature and extensively battle-tested, but no longer actively developed: the last commit landed roughly three years ago. The repo shows substantial scale (6.9M Python LOC), comprehensive CI/CD (CircleCI, GitHub Actions lint workflows), code coverage tracking (codecov.yml), and pre-commit hooks (.pre-commit-config.yaml). The NEWS.md and multiple published papers reflect a strong record of research output.

Low-to-medium risk for core dialogue training. The Node server dependencies (Express, Nunjucks, ws) are mainstream but pinned to old, vulnerable versions (see Security observations below). The main risk is scope creep: the framework spans model training, MTurk orchestration, web servers, and visualization, and touching many subsystems increases maintenance burden. No obvious single-maintainer bottleneck is visible, but the broad API surface means breaking changes could cascade.

Active areas of work

Dialogue and safety research featured prominently: the repo includes sample model cards (docs/sample_model_cards/blenderbot_90M/ with safety heatmaps), pre-commit linting, and stale-issue management (workflows/stale.yml). The presence of IMAGE_CHAT, CLEVR, and VisDial tasks points to multimodal dialogue as an active direction of expansion.

🚀Get running

```bash
git clone https://github.com/facebookresearch/ParlAI.git
cd ParlAI
pip install -e .
python examples/display_data.py --task personachat
```

For MTurk server: cd parlai/mturk && npm install && npm start

Daily commands: For core training: python parlai/scripts/train_model.py -t personachat -m transformer/generator (inferred from common ParlAI patterns). For the MTurk server: npm start in the Node.js root (see package.json). See .circleci/config.yml for full test suite invocation.
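
If you'd rather drive ParlAI from Python than from the CLI, the core loop is an agent paired with a world. Below is a minimal sketch following the pattern of examples/display_data.py; the component names (repeat_label, create_agent, create_task) are real ParlAI identifiers, but verify the exact API against your installed version:

```python
# Minimal sketch: step through a few PersonaChat examples from Python.
# Assumes a `pip install -e .` checkout; data auto-downloads on first run.
from parlai.core.agents import create_agent
from parlai.core.params import ParlaiParser
from parlai.core.worlds import create_task

parser = ParlaiParser(add_parlai_args=True, add_model_args=True)
opt = parser.parse_args(['--task', 'personachat', '--model', 'repeat_label'])

agent = create_agent(opt)        # tutorial agent that echoes the gold label
world = create_task(opt, agent)  # pairs the task's Teacher with the agent

for _ in range(3):
    world.parley()               # one observe/act exchange
    print(world.display())
```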

🗺️Map of the codebase

  • README.md — Entry point documenting ParlAI's mission as a dialogue AI framework, core concepts, and quick-start paths for all contributors.
  • parlai/core/agents.py — Defines the Agent base class and core dialogue mechanics—every custom agent, teacher, or model inherits from abstractions here (see the protocol sketch after this list).
  • parlai/core/worlds.py — Implements the World and BatchWorld orchestration layer that manages multi-agent interaction loops and data flow during training/evaluation.
  • parlai/core/teachers.py — Defines Teacher and DialogTeacher classes that load and serve dialogue data; understanding task structure depends on this.
  • parlai/core/params.py — Centralized parameter/argument parsing system used across all agents, tasks, and scripts—critical for configuration management.
  • parlai/tasks — Directory containing all task implementations and dataset loaders; adding new dialogue tasks requires understanding this structure.
  • parlai/agents/torch_agent/torch_agent.py — Base class for neural models; most transformer-based and learning agents extend this with PyTorch integration.
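
The load-bearing abstraction in agents.py is the observation/action dict exchange. Here is a minimal sketch using the tutorial RepeatLabelAgent; the import path follows ParlAI's tutorials, so confirm it matches your checkout:

```python
# Sketch of the observation/action dict protocol defined around
# parlai/core/agents.py: plain dicts, no enforced schema.
from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent
from parlai.core.params import ParlaiParser

opt = ParlaiParser(add_parlai_args=True).parse_args(['--task', 'personachat'])
agent = RepeatLabelAgent(opt)

agent.observe({'text': 'hello!', 'labels': ['hi there'], 'episode_done': False})
print(agent.act())  # the reply is another dict, e.g. {'id': ..., 'text': 'hi there'}
```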

🛠️How to make changes

Add a New Dialogue Task/Dataset

  1. Create a new directory under parlai/tasks/{task_name} with __init__.py (parlai/tasks/__init__.py)
  2. Implement a Teacher class extending parlai.core.teachers.DialogTeacher in parlai/tasks/{task_name}/agents.py (parlai/core/teachers.py)
  3. Implement setup_data() method to yield (text, label) tuples, and optionally override custom_evaluation() for task-specific metrics (parlai/tasks/{task_name}/agents.py)
  4. Add dataset download logic in build_data() function in the same agents.py file, or defer to parlai/core/build_data.py (parlai/core/build_data.py)
  5. Register task in parlai/tasks/{task_name}/__init__.py and verify with parlai display_data -t {task_name} (parlai/tasks/__init__.py)
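
Steps 2-4 usually amount to a few dozen lines. Here is a hypothetical skeleton of parlai/tasks/my_task/agents.py, assuming a simple JSON list of question/answer pairs; the file path and data format are placeholders, while DialogTeacher and the setup_data() contract are real ParlAI concepts:

```python
# parlai/tasks/my_task/agents.py: hypothetical skeleton for a new task.
import json

from parlai.core.teachers import DialogTeacher


class DefaultTeacher(DialogTeacher):
    def __init__(self, opt, shared=None):
        # In a real task, point this at the file build_data() downloads.
        opt['datafile'] = '/path/to/my_task.json'
        self.id = 'my_task'
        super().__init__(opt, shared)

    def setup_data(self, path):
        # Yield ((prompt, labels), new_episode_flag); DialogTeacher handles
        # episode bookkeeping, batching, and default metrics.
        with open(path) as f:
            for pair in json.load(f):
                yield (pair['question'], [pair['answer']]), True
```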

Implement a Custom Neural Agent (PyTorch)

  1. Create a new agent file (e.g., parlai/agents/my_model/my_agent.py) extending parlai.agents.torch_agent.torch_agent.TorchAgent (parlai/agents/torch_agent/torch_agent.py)
  2. Implement build_model() to create your PyTorch module and forward() for inference (parlai/agents/torch_agent/torch_agent.py)
  3. Override compute_loss() for custom training objectives and batch_act() for batched inference (parlai/agents/torch_agent/torch_agent.py)
  4. Define agent arguments in add_cmdline_args() class method to allow CLI configuration via parlai/core/params.py (parlai/core/params.py)
  5. Test with parlai train_model -m my_model -t your_task -n 100 --validation_every_n_secs 60 (parlai/core/script.py)
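
A hypothetical skeleton for steps 1-4 is sketched below. Note that in recent ParlAI versions the base class is importable as parlai.core.torch_agent.TorchAgent, and build_model()/compute_loss() are hooks driven by the generator/ranker subclasses, so check which methods your chosen base actually calls before copying this:

```python
# Hypothetical agent skeleton. TorchAgent, Output, and the train_step/
# eval_step hooks are real ParlAI names; the model itself is a placeholder.
import torch.nn as nn

from parlai.core.torch_agent import Output, TorchAgent


class MyAgent(TorchAgent):
    @classmethod
    def add_cmdline_args(cls, parser, partial_opt=None):
        super().add_cmdline_args(parser, partial_opt=partial_opt)
        group = parser.add_argument_group('MyAgent')
        group.add_argument('--hidden-size', type=int, default=256)
        return parser

    def build_model(self):
        # Placeholder module; the generator/ranker subclasses call this
        # during __init__ and expect your full architecture here.
        return nn.Linear(self.opt['hidden_size'], self.opt['hidden_size'])

    def train_step(self, batch):
        # Forward pass + loss, then self.backward(loss) and
        # self.update_params() (both provided by TorchAgent).
        ...

    def eval_step(self, batch):
        return Output(['hello world'] * len(batch.text_vec))
```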

Add a New Evaluation Metric

  1. Create a new Metric class in parlai/core/metrics.py extending parlai.core.metrics.Metric (parlai/core/metrics.py)
  2. Implement __add__() for accumulation and value() to return the final metric value (parlai/core/metrics.py)
  3. Register the metric in your Teacher's custom_evaluation() method or return it from agent's eval_step() callback (parlai/core/teachers.py)
  4. Verify integration by running parlai eval_model -m your_agent -t your_task and checking logs (parlai/core/script.py)
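
Below is a hypothetical metric for steps 1-3, built on AverageMetric, a ratio-style Metric in parlai/core/metrics.py that already implements __add__() and value(); the metric itself is invented for illustration:

```python
# Hypothetical custom metric: fraction of model responses that ask a
# question back. AverageMetric(numer, denom) averages across examples.
from parlai.core.metrics import AverageMetric


class QuestionRateMetric(AverageMetric):
    @staticmethod
    def compute(response_text: str) -> 'QuestionRateMetric':
        return QuestionRateMetric(int('?' in response_text), 1)


# Inside your Teacher (see parlai/core/teachers.py):
#
# def custom_evaluation(self, teacher_action, labels, model_response):
#     if model_response.get('text'):
#         self.metrics.add('question_rate',
#                          QuestionRateMetric.compute(model_response['text']))
```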

Create a Crowdsourcing Evaluation (MTurk)

  1. Create a new task directory under parlai/crowdsourcing/tasks/{task_name}/ with blueprint and task configuration (parlai/crowdsourcing/tasks)
  2. Define task YAML config in parlai/crowdsourcing/tasks/{task_name}/task_config.yaml specifying prompt, validations, and reward (parlai/crowdsourcing/tasks)
  3. Implement frontend in parlai/crowdsourcing/tasks/{task_name}/frontend/ using React or provided templates (parlai/crowdsourcing/tasks)
  4. Launch with parlai mturk --task {task_name} after setting up AWS credentials; logs and results stored in mturk_data/ (parlai/crowdsourcing)

🔧Why these technologies

  • PyTorch — Standard deep learning framework for neural dialogue models, with distributed training and autograd support; TorchAgent abstracts training loops.
  • Python + argparse-like params.py — Unified configuration system allows reproducible experiments; every agent, task, and script inherits parameter handling.
  • Task/Teacher abstraction — Decouples data loading from model training; makes it trivial to swap datasets without rewriting agent code.
  • World orchestration — Encapsulates multi-agent dialogue loops, enabling flexible evaluation (single-agent, batched, crowdsourced) without agent modifications.
  • Pluggable metrics (BLEU, F1, ROUGE, safety scores) — Task-specific evaluation without modifying core training code; metrics accumulated and reported per-batch or epoch.

⚖️Trade-offs already made

  • Framework uses synchronous, Python-based training loops rather than async/distributed by default

    • Why: Simplicity and debuggability for research; single-machine experimentation is the primary use case.
    • Consequence: Large-scale distributed training requires manual integration with PyTorch DistributedDataParallel; not a built-in, turnkey feature.
  • Teachers yield one example at a time; batching handled by World

    • Why: Decouples dataset logic from batch construction; easier to implement custom sampling strategies.
    • Consequence: Small per-example overhead; not optimal for very large batch sizes or heterogeneous data types without custom optimizations.
  • Dialogue stored in action-observation dictionaries (flexible but untyped)

    • Why: Flexible enough to represent diverse tasks (QA, chitchat, multimodal dialogue) with a single message format.
    • Consequence: No static type checking; misspelled or missing dict keys fail silently at runtime instead of being caught by tooling.

🪤Traps & gotchas

  • Dataset downloads: Most tasks auto-download data on first run to ~/.parlai/downloads/; all datasets together require roughly 50GB of free space.
  • GPU memory: Training large models (e.g., TransformerGenerator with beam search) requires careful batchsize tuning; defaults often exceed typical VRAM.
  • MTurk credentials: The MTurk server requires AWS credentials in the environment (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY); missing credentials silently disable MTurk tasks.
  • Linting requirements: Pre-commit hooks enforce Black, Flake8, and custom ParlAI style checks; commits will fail if code doesn't match (run autoformat.sh before committing).
  • CUDA version mismatch: The PyTorch binary may require a specific CUDA version; consult the Dockerfile and environment setup docs.


💡Concepts to learn

  • World Model (Agent Orchestration) — ParlAI's World/BatchWorld classes define how agents interact in training and evaluation episodes; understanding serial vs. batch world execution is critical for debugging dialogue flow and optimizing throughput
  • Observation-Action Dictionary Protocol — The core Observable/Action dict interface standardizes how agents communicate (text, reward, done signals); all custom agents and tasks must implement this protocol
  • Beam Search Decoding — Reference dialogue models use beam search during inference to generate diverse responses; ParlAI's TorchGeneratorAgent provides optimized CUDA kernels for this; understanding beam width tradeoffs (speed vs. quality) is essential for deployment
  • Metric Computation in Batches — ParlAI evaluates BLEU, ROUGE, F1, METEOR across batch predictions; metrics must be aggregated correctly across distributed/multi-GPU eval; off-by-one errors in reduction logic cause spurious benchmark differences
  • Teacher-Student Paired Learning — Many dialogue tasks pair a teacher (data provider) with a student (learner agent); ParlAI's Task base class encodes this asymmetry; critical for supervised learning and imitation learning benchmarks
  • Data Loader Lazy Evaluation — Tasks in ParlAI use lazy loading (download + parse on-demand) rather than precomputing all splits; this reduces disk footprint but means first run is slow; understanding caching behavior prevents surprising delays
  • Amazon Mechanical Turk Task Lifecycle — ParlAI's MTurk integration handles worker assignment, validation, bonus payment, and rejection workflows; the Express server manages WebSocket lifecycle and worker state persistence; broken task configuration can silently reject valid HITs
  • huggingface/transformers — Standard pretrained transformer models and tokenizers used by ParlAI agents; integrates seamlessly for model architecture
  • pytorch/pytorch — Core deep learning framework underlying all neural agent training and inference in ParlAI
  • facebookresearch/Blender — Official companion repo containing the BlenderBot model card, pretraining details, and safety evaluation benchmarks referenced in ParlAI model zoo
  • facebookresearch/diplom — Related dialogue evaluation framework also by Facebook Research; often used to measure ParlAI model quality
  • conversationai/ConvAI — Annual dialogue competition using ParlAI as the official leaderboard and evaluation platform

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add Node.js security vulnerability scanning workflow

The package.json shows Node.js dependencies with pinned versions from 2017 (Express 4.15.3, body-parser 1.17.2, ws 5.2.3). There is no GitHub Actions workflow for npm audit or snyk scanning. This is critical for a framework used in dialogue systems that may handle user data. Adding a workflow would catch dependency vulnerabilities early and build trust with users.

  • [ ] Create .github/workflows/npm-audit.yml to run 'npm audit' on dependency updates
  • [ ] Add workflow to check for known vulnerabilities in ws, express, and body-parser versions
  • [ ] Configure the workflow to fail CI if high/critical vulnerabilities are found
  • [ ] Document security scanning expectations in CONTRIBUTING.md

Add integration tests for chat_service module

The file structure shows docs/source/chat_service.md exists, indicating a chat service feature, but there are no visible test files for chat_service functionality in the partial file listing. Given the Node.js server dependencies (ws, express, body-parser) and the presence of .github/ISSUE_TEMPLATE/mturk-bug.md, the chat service likely handles WebSocket connections and HTTP routing. Adding integration tests would ensure reliability.

  • [ ] Create tests/test_chat_service.py to test WebSocket connections via the server
  • [ ] Add test cases for HTTP endpoint routing with body-parser
  • [ ] Test message serialization/deserialization through the nunjucks templating layer
  • [ ] Add test fixtures for async-lock behavior under concurrent requests
  • [ ] Reference new tests in conftest.py if pytest plugins are needed

Create CI workflow for linting Node.js server code and dependencies

The repo has .github/workflows/lint.yml and .github/scripts/git-secrets for Python code quality, but the Node.js server (server.js referenced in package.json) lacks parallel linting coverage. The .flake8 and .pre-commit-config.yaml suggest strong code quality standards. Adding Node.js linting ensures consistency across the mixed Python/Node stack.

  • [ ] Extend .github/workflows/lint.yml to include eslint/prettier for Node.js files
  • [ ] Add eslint configuration (.eslintrc.json) for consistent code style across server.js and related files
  • [ ] Configure pre-commit hook in .pre-commit-config.yaml to lint Node.js on commits
  • [ ] Add Node.js dependency audit check (npm audit) to lint workflow

🌿Good first issues

  • Add comprehensive type hints to parlai/core/agents.py and parlai/core/worlds.py (currently sparse; would improve IDE support and catch bugs early). Start by annotating Observable/Action dict keys and World.parley() return types (see the TypedDict sketch after this list).
  • Write integration tests for the 5-10 most-used tasks (PersonaChat, DailyDialog, SQuAD) in tests/integration/ to verify data loading, batching, and evaluation metrics don't regress across versions. These tasks lack end-to-end test coverage.
  • Expand docs/source/ with architecture decision records (ADRs) explaining why Agent/World/Task split exists, when to use BatchWorld vs. serial World, and common pitfalls in custom task implementation. Currently missing from formal docs.
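
For the type-hints item, one cautious starting point is a TypedDict over the common message keys. ParlAI already ships a dict-subclass Message type in parlai/core/message.py, so annotations like the hypothetical sketch below should be reconciled with it:

```python
# Hypothetical TypedDict for common observation/action keys; key names
# follow ParlAI conventions but should be checked against the real
# Message class in parlai/core/message.py.
from typing import List, TypedDict


class ObservationDict(TypedDict, total=False):
    text: str
    labels: List[str]
    eval_labels: List[str]
    label_candidates: List[str]
    reward: float
    episode_done: bool
```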


📝Recent commits

  • 6c80241 — v0 litellm (#5076) (ishaan-jaff)
  • c437a91 — Bump @babel/traverse in /parlai/crowdsourcing/tasks/acute_eval/webapp (#5085) (dependabot[bot])
  • 7eb9d76 — Bump gitpython from 3.1.35 to 3.1.37 (#5082) (dependabot[bot])
  • cd23e37 — style: typo in help argument (#5074) (guspan-tanadi)
  • 2ce94e6 — Bump postcss in /parlai/crowdsourcing/tasks/acute_eval/webapp (#5080) (dependabot[bot])
  • 7387289 — Bump pillow from 9.3.0 to 10.0.1 (#5079) (dependabot[bot])
  • 585d32b — Bump gitpython from 3.1.34 to 3.1.35 (#5078) (dependabot[bot])
  • ce7d7ed — Bump nunjucks from 3.0.1 to 3.2.4 in /parlai/chat_service/core/server (#5077) (dependabot[bot])
  • bbcb45e — Bump gitpython from 3.1.32 to 3.1.34 (#5075) (dependabot[bot])
  • 24daddb — Add OpenAI Chat Completion Agent (GPT-3.5/GPT-4) (#5061) (rguan1)

🔒Security observations

  • High · Outdated and Vulnerable Dependencies — package.json. The package.json contains multiple dependencies with known vulnerabilities. Express 4.15.3 (released 2017) and ws 5.2.3 have multiple CVEs including remote code execution, denial of service, and regular expression denial of service attacks. Body-parser 1.17.2 is also outdated. Fix: Update all dependencies to latest versions: express (latest), ws (latest), body-parser (latest), nunjucks (latest), async-lock (latest), and wait-until (latest). Run npm audit fix and implement automated dependency scanning.
  • High · Insecure Conda/Pip Package Installation — Dockerfile (pip install line). The Dockerfile's pip and conda install commands pin exact versions for only a few packages, which introduces supply-chain risk. Additionally, boto3==1.17.95 and botocore==1.20.95 are outdated and may contain security vulnerabilities. Fix: Pin all package versions explicitly, update boto3 and botocore, and use pip-compile or similar to generate locked dependency files. Consider a requirements.txt with exact hashes.
  • High · Missing Base Image Digest Verification — Dockerfile (FROM statement). The Dockerfile uses 'nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04' without a content digest (SHA256). This allows image substitution attacks where a malicious image could be pulled instead. Fix: Use image digests: FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04@sha256:[digest]. Verify the correct digest from the official registry before committing.
  • Medium · Unverified Remote Script Execution — Dockerfile (curl and bash anaconda_installer.sh). The Dockerfile downloads and executes Anaconda installer script from a remote URL without verification (no checksum validation). This is a classic supply chain attack vector. Fix: Verify the downloaded installer using SHA256 checksums. Store the checksum in the Dockerfile and verify before execution: 'echo "expected_hash anaconda_installer.sh" | sha256sum -c -'
  • Medium · Outdated Ubuntu Base Image — Dockerfile (FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04). Ubuntu 18.04 (Bionic) is an older LTS release and may lack security patches compared to newer versions. It reached standard support end on May 31, 2023. Fix: Upgrade to Ubuntu 20.04 LTS or 22.04 LTS. Verify CUDA compatibility with your pytorch and other dependencies. Consider using nvidia/cuda:11.8-devel-ubuntu22.04 or later.
  • Medium · Insecure apt Package Installation — Dockerfile (apt install line). The Dockerfile uses 'apt install -y' without updating package lists first (apt update), and doesn't specify package versions. This could install vulnerable versions of git and curl. Fix: Combine apt update with apt install in a single RUN statement to ensure fresh package list. Optionally pin specific package versions and use --no-install-recommends to reduce attack surface.
  • Medium · Spacy Model Downloaded Without Verification — Dockerfile (spacy download command). The Dockerfile downloads the spacy language model using 'python -m spacy download' without verifying authenticity or integrity. Fix: Verify downloaded models using checksums. Consider pre-building a custom Docker image with the models included, or implement hash verification in the download process.
  • Low · Incomplete Dockerfile — Dockerfile (end of file). The Dockerfile appears to be incomplete - the last line reads '# Download the ParlAI Githu' suggesting the file is cut off. This may indicate missing security-critical steps or an incomplete build process. Fix: Complete the Dockerfile. Ensure all installation steps are finalized. Add security best practices such as: creating a non-root user, setting appropriate file permissions, and implementing health checks.
  • Low · No Security Headers Configuration Visible — no file or fix details were generated for this finding; review the Express server configuration for standard security headers (e.g., via the helmet middleware) before exposing it publicly.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
