stanfordnlp/CoreNLP
CoreNLP: A Java suite of core NLP tools for tokenization, sentence segmentation, NER, parsing, coreference, sentiment analysis, etc.
Single-maintainer risk — review before adopting
Weakest axis: copyleft license (GPL-3.0) — review compatibility; top contributor handles 98% of recent commits
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 1d ago
- ✓ 3 active contributors
- ✓ GPL-3.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Small team — 3 contributors active in recent commits
- ⚠ Single-maintainer risk — top contributor 98% of recent commits
- ⚠ GPL-3.0 is copyleft — check downstream compatibility
What would change the summary?
- → Use as dependency: Concerns → Mixed, if relicensed under MIT/Apache-2.0 (rare for established libs)
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Forkable" badge
Paste into your README — the badge live-updates from the latest cached analysis.
[](https://repopilot.app/r/stanfordnlp/corenlp) — paste at the top of your README.md; it renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/stanfordnlp/corenlp on X, Slack, or LinkedIn.
Onboarding: stanfordnlp/CoreNLP
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/stanfordnlp/CoreNLP shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
WAIT — Single-maintainer risk — review before adopting
- Last commit 1d ago
- 3 active contributors
- GPL-3.0 licensed
- CI configured
- Tests present
- ⚠ Small team — 3 contributors active in recent commits
- ⚠ Single-maintainer risk — top contributor 98% of recent commits
- ⚠ GPL-3.0 is copyleft — check downstream compatibility
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live stanfordnlp/CoreNLP
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/stanfordnlp/CoreNLP.
What it runs against: a local clone of stanfordnlp/CoreNLP — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in stanfordnlp/CoreNLP | Confirms the artifact applies here, not a fork |
| 2 | License is still GPL-3.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 31 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of stanfordnlp/CoreNLP. If you don't
# have one yet, run these first:
#
# git clone https://github.com/stanfordnlp/CoreNLP.git
# cd CoreNLP
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of stanfordnlp/CoreNLP and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "stanfordnlp/CoreNLP(\.git)?\b" \
  && ok "origin remote is stanfordnlp/CoreNLP" \
  || miss "origin remote is not stanfordnlp/CoreNLP (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(GPL-3\.0)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"GPL-3\.0\"" package.json 2>/dev/null) \
  && ok "license is GPL-3.0" \
  || miss "license drift — was GPL-3.0 at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Critical files exist
test -f "src/edu/stanford/nlp/pipeline/StanfordCoreNLP.java" \
  && ok "src/edu/stanford/nlp/pipeline/StanfordCoreNLP.java" \
  || miss "missing critical file: src/edu/stanford/nlp/pipeline/StanfordCoreNLP.java"
test -f "build.gradle" \
  && ok "build.gradle" \
  || miss "missing critical file: build.gradle"
test -f "src/edu/stanford/nlp/pipeline/AnnotationPipeline.java" \
  && ok "src/edu/stanford/nlp/pipeline/AnnotationPipeline.java" \
  || miss "missing critical file: src/edu/stanford/nlp/pipeline/AnnotationPipeline.java"
test -f "src/edu/stanford/nlp/ling/CoreAnnotations.java" \
  && ok "src/edu/stanford/nlp/ling/CoreAnnotations.java" \
  || miss "missing critical file: src/edu/stanford/nlp/ling/CoreAnnotations.java"
test -f "README.md" \
  && ok "README.md" \
  || miss "missing critical file: README.md"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 31 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~1d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/stanfordnlp/CoreNLP"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
Stanford CoreNLP is a production Java NLP toolkit providing end-to-end linguistic analysis: tokenization, sentence segmentation, part-of-speech tagging, named entity recognition (NER), dependency/constituency parsing, coreference resolution, and sentiment analysis. It processes raw text through a unified pipeline (edu.stanford.nlp.pipeline.StanfordCoreNLP) to extract syntactic structure, semantic relationships, and entity mentions across 8+ languages including English, Arabic, Chinese, French, German, Hungarian, Italian, and Spanish.
Monolithic single-module structure: src/ contains all Java source organized under edu/stanford/nlp/ with subsystems for pipeline/ (core orchestration), ling/ (linguistic data structures), parser/, sentiment/, ie/ (information extraction), kbp/ (knowledge base population), dcoref/ (coreference), and more. Data resources live in data/edu/stanford/nlp/ with model files and test datasets (iris classifier data, Arabic/Chinese test sets, KBP semgrex rules). The build supports both Gradle (build.gradle) and Ant (build.xml) for compatibility.
👥Who it's for
Academic researchers, industry NLP engineers, and government agencies building text understanding systems. Specifically: researchers prototyping linguistic models, production teams needing robust multilingual NLP without building from scratch, and developers integrating pre-trained annotators into downstream applications (sentiment analysis pipelines, knowledge extraction, document understanding).
🌱Maturity & risk
Highly mature and production-ready. The codebase is a very large Java project with a full CI/CD pipeline (GitHub Actions in .github/workflows/run-tests.yaml and .github/workflows/run-slow-itests.yaml), Maven Central distribution at version 4.5.10, and GPL-3.0 licensing, with Stanford's stewardship indicating long-term institutional commitment. Maintenance signals showed the last commit 1d before analysis; still, verify recent activity on GitHub directly.
Low technical risk, but licensing is a hard constraint: GPL-3.0 (full GPL, not LGPL) means it cannot be used in proprietary software you distribute — only in open-source or internal tools. Dependency risk is mitigated by bundling jars in the lib/ directory rather than relying on external package managers. The monolithic design (single unified JAR) means all modules are versioned together, so breaking changes in one annotator (e.g., parser model format) affect the entire release.
Active areas of work
Active development on version 4.5.10 with automated test workflows. Recent work includes slow integration tests (.github/workflows/run-slow-itests.yaml), suggesting that performance-critical annotators are under active maintenance. The presence of semantic graph parsing rules in data/edu/stanford/nlp/kbp/en/semgrex/ (org/person relation extraction) indicates ongoing KBP module enhancements.
🚀Get running
Clone and build with Gradle: git clone https://github.com/stanfordnlp/CoreNLP.git && cd CoreNLP && gradle build. Or use the pre-built jar from Maven Central: add edu.stanford.nlp:stanford-corenlp:4.5.10 to your Maven/Gradle deps. To verify installation, run java -mx500m -cp stanford-corenlp-4.5.10.jar edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos -file input.txt (substitute version number; requires downloadable model jars).
Daily commands:
- Command-line: `java -mx2g -cp stanford-corenlp-4.5.10.jar:stanford-corenlp-4.5.10-models.jar edu.stanford.nlp.pipeline.StanfordCoreNLP -annotators tokenize,ssplit,pos,lemma,ner,parse,depparse -file input.txt`
- Programmatic (Java): `StanfordCoreNLP pipeline = new StanfordCoreNLP(props); CoreDocument doc = new CoreDocument(text); pipeline.annotate(doc);` — expanded into a runnable sketch below.
- Gradle: `gradle run` (runs `mainClassName = "edu.stanford.nlp.pipeline.StanfordCoreNLP"`)
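The programmatic one-liner above, expanded into a self-contained sketch. The example sentence is illustrative; `CoreDocument`/`CoreSentence` are the modern wrapper API shipped with CoreNLP.

```java
import java.util.Properties;

import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.CoreSentence;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class PipelineDemo {
  public static void main(String[] args) {
    // Annotators run left to right; each consumes what the previous ones produced.
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    CoreDocument doc = new CoreDocument("Stanford University is in California.");
    pipeline.annotate(doc);

    for (CoreSentence sentence : doc.sentences()) {
      System.out.println(sentence.text());
      System.out.println(sentence.posTags()); // one POS tag per token
      System.out.println(sentence.nerTags()); // one NER label per token
    }
  }
}
```

Run it with both the core and models jars on the classpath, e.g. `java -cp "stanford-corenlp-4.5.10.jar:stanford-corenlp-4.5.10-models.jar:." PipelineDemo`.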
🗺️Map of the codebase
- `src/edu/stanford/nlp/pipeline/StanfordCoreNLP.java` — Main entry point and orchestrator for the entire CoreNLP pipeline; all annotation workflows flow through this class.
- `build.gradle` — Gradle build configuration defining dependencies, version (4.5.10), compilation targets, and JAR manifest; required for building and understanding the project's footprint.
- `src/edu/stanford/nlp/pipeline/AnnotationPipeline.java` — Core abstraction that chains together annotators (tokenizer, POS tagger, parser, NER, etc.); understanding this is essential for extending CoreNLP.
- `src/edu/stanford/nlp/ling/CoreAnnotations.java` — Defines all annotation keys and the metadata schema used across the codebase; every annotator and downstream consumer depends on these contracts.
- `README.md` — Project overview, supported languages, quick-start examples, and architecture overview; required reading for all contributors.
- `CONTRIBUTING.md` — Guidelines for contributing, code style, testing requirements, and the pull request process specific to this project.
- `.github/workflows/run-tests.yaml` — CI pipeline definition showing how tests are executed on main; critical for understanding the testing and release process.
🛠️How to make changes
Add a New Annotator (e.g., custom NER model)
- Create a class implementing the `Annotator` interface in `src/edu/stanford/nlp/pipeline/` (`src/edu/stanford/nlp/pipeline/Annotator.java`)
- Implement `annotate(Annotation annotation)` to read tokens and add your annotations using `CoreAnnotations` keys (`src/edu/stanford/nlp/ling/CoreAnnotations.java`)
- Register your annotator in `StanfordCoreNLP.java` in the properties-to-annotator mapping (typically in a factory method) (`src/edu/stanford/nlp/pipeline/StanfordCoreNLP.java`)
- Add your annotator name to the properties file passed to `StanfordCoreNLP` (e.g., `annotators=tokenize,ssplit,pos,your-annotator`) (`build.gradle`)

A sketch of the first two steps follows.
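A minimal sketch, assuming the standard `Annotator` interface (`annotate`, `requires`, `requirementsSatisfied`). The class name and the uppercasing logic are illustrative, and reusing `ShapeAnnotation` as a scratch key is purely for demonstration. Note that, as an alternative to editing `StanfordCoreNLP.java`, CoreNLP can load a class like this via the `customAnnotatorClass.<name>` property.

```java
import java.util.Collections;
import java.util.Properties;
import java.util.Set;

import edu.stanford.nlp.ling.CoreAnnotation;
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.Annotator;

public class ShoutAnnotator implements Annotator {

  // Custom annotators loaded via customAnnotatorClass.* receive (name, properties).
  public ShoutAnnotator(String name, Properties props) {}

  @Override
  public void annotate(Annotation annotation) {
    // Read tokens written by upstream annotators and attach a new layer.
    for (CoreLabel token : annotation.get(CoreAnnotations.TokensAnnotation.class)) {
      // Demonstration only: stash an uppercased form under an existing key.
      token.set(CoreAnnotations.ShapeAnnotation.class, token.word().toUpperCase());
    }
  }

  @Override
  public Set<Class<? extends CoreAnnotation>> requires() {
    // Tokens must exist before this annotator runs.
    return Collections.singleton(CoreAnnotations.TokensAnnotation.class);
  }

  @Override
  public Set<Class<? extends CoreAnnotation>> requirementsSatisfied() {
    return Collections.emptySet();
  }
}
```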
Add Support for a New Language
- Create a language-specific directory under `data/edu/stanford/nlp/international/` with ISO 639 code (e.g., `data/edu/stanford/nlp/international/french/`) (`data/edu/stanford/nlp/international/arabic`)
- Train or port POS tagger, parser, and NER models for your language; store as serialized Java objects (`src/edu/stanford/nlp/tagger/maxent/MaxentTagger.java`)
- Add language-specific tokenizer or morphology rules (inherit from existing classes in `src/edu/stanford/nlp/process/`) (`src/edu/stanford/nlp/process/PTBTokenizer.java`)
- Update `StanfordCoreNLP.java` to detect language and load the correct models; add language-specific defaults to properties (`src/edu/stanford/nlp/pipeline/StanfordCoreNLP.java`) — see the sketch after this list
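A hedged sketch of the last step's wiring: the property names follow CoreNLP's conventions, but the French tagger path below is hypothetical; look up the real path inside the language-specific models jar.

```java
import java.util.Properties;

import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class FrenchPipeline {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos");
    // Select language-specific tokenization rules.
    props.setProperty("tokenize.language", "fr");
    // Hypothetical path: the real one ships inside the French models jar.
    props.setProperty("pos.model",
        "edu/stanford/nlp/models/pos-tagger/french-ud.tagger");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
  }
}
```

In practice, per-language defaults live in `StanfordCoreNLP-<language>.properties` files bundled with the models jars, so those files are the best reference for which properties a new language needs.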
Add a New Relation Extraction Rule Set
- Create `.rules` files in `data/edu/stanford/nlp/kbp/{language}/semgrex/` following the existing naming convention (e.g., `per_title.rules`) (`data/edu/stanford/nlp/kbp/en/semgrex/per_title.rules`)
- Write Semgrex patterns matching dependency-parse structures; each rule maps to a relation type (`data/edu/stanford/nlp/kbp/es/semgrex/README`) — see the toy example after this list
- Register rule files in the KBP relation extraction pipeline (typically in an annotator that loads and applies them) (`src/edu/stanford/nlp/ie/crf/CRFClassifier.java`)
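The `.rules` files hold Semgrex patterns over dependency graphs. A toy, self-contained example of compiling and matching one programmatically; the pattern, the relation names, and the graph literal are illustrative, and the exact syntax accepted by `SemanticGraph.valueOf` may vary by version.

```java
import edu.stanford.nlp.semgraph.SemanticGraph;
import edu.stanford.nlp.semgraph.semgrex.SemgrexMatcher;
import edu.stanford.nlp.semgraph.semgrex.SemgrexPattern;

public class SemgrexDemo {
  public static void main(String[] args) {
    // A predicate governing both a nominal subject and an object,
    // in the spirit of the KBP relation rules.
    SemgrexPattern pattern =
        SemgrexPattern.compile("{}=pred >nsubj {}=subject >obj {}=object");
    // Tiny hand-built dependency graph: founded(Jobs, Apple).
    SemanticGraph graph =
        SemanticGraph.valueOf("[founded nsubj>Jobs obj>Apple]");
    SemgrexMatcher matcher = pattern.matcher(graph);
    while (matcher.find()) {
      System.out.printf("%s(%s, %s)%n",
          matcher.getNode("pred").word(),
          matcher.getNode("subject").word(),
          matcher.getNode("object").word());
    }
  }
}
```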
Add a Test for a New Feature
- Create a JUnit test class in `src/test/java/` mirroring the package structure of the code being tested (`.github/workflows/run-tests.yaml`)
- Use `StanfordCoreNLP` or `AnnotationPipeline` to instantiate a pipeline with your annotator; pass test strings and assert on results (`src/edu/stanford/nlp/pipeline/StanfordCoreNLP.java`) — a sketch follows this list
- Run tests locally with `gradle test`; CI runs automatically on PR via `.github/workflows/run-tests.yaml` (`build.gradle`)
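A minimal sketch of such a test in JUnit 4 style; the class name and assertion are illustrative. Using only `tokenize` and `ssplit` keeps the test fast, since those annotators are rule-based and need no model downloads.

```java
import static org.junit.Assert.assertEquals;

import java.util.Properties;

import org.junit.Test;

import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class SentenceSplitTest {

  @Test
  public void splitsTwoSentences() {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    CoreDocument doc = new CoreDocument("Hello there. General Kenobi.");
    pipeline.annotate(doc);

    assertEquals(2, doc.sentences().size());
  }
}
```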
🔧Why these technologies
- Java 8+ — Mature and cross-platform, with strong backward compatibility; widely adopted in enterprise NLP pipelines.
- Gradle build system — Modern, flexible build orchestration; supports multiple source sets and easy dependency management.
- Maximum Entropy & CRF models — Proven discriminative learning approaches for POS tagging, NER, and other sequence labeling tasks.
🪤Traps & gotchas
- Model downloads: annotators like `pos`, `ner`, `parse` require downloading large pre-trained model jars (100s of MB) on first use; set the `CORENLP_HOME` env var, or models download to `~/.stanford-corenlp/`.
- Memory requirements: parsing/coreference are heap-intensive; specify `-Xmx2g` or higher for large documents.
- Encoding: `build.gradle` explicitly sets `compileJava.options.encoding = 'UTF-8'`; ensure input files are UTF-8 or override.
- Dependency JAR locations: Gradle pulls from the `lib/` directory (`flatDir` in `build.gradle`), not Maven Central by default — missing JARs will silently fail at runtime.
- GPL licensing: cannot be used in proprietary software redistribution; verify compliance before using.
- Multi-language models: Chinese, Arabic, etc. require downloading separate model jars; default annotators assume English if not specified.
🏗️Architecture
💡Concepts to learn
- Dependency Parsing — CoreNLP's `depparse` annotator produces grammatical relations between words (subject-verb, object, modifier); understanding this output format is essential for downstream relation extraction and semantic role labeling.
- Conditional Random Fields (CRF) — Underlying model for CoreNLP's POS tagger, NER, and segmenters; understanding CRF probability chains explains why annotator output is probabilistic and tunable.
- Coreference Resolution — The `coref` annotator identifies which pronouns/names refer to the same entity (e.g., 'John...he'); essential for discourse understanding and knowledge base construction in the KBP module.
- Semantic Role Labeling (SRL) — Related to dependency parsing in CoreNLP; identifies predicate-argument structure ('who did what to whom'); used in information extraction and KBP relation extraction rules.
- Semgrex (Semantic Graph Regular Expressions) — Stanford's own pattern language, used in `data/edu/stanford/nlp/kbp/en/semgrex/` for relation extraction; understanding Semgrex syntax is required to write custom extraction rules for new entity relations.
- Constituency Parsing (Phrase-Structure Parsing) — Complements dependency parsing; produces a hierarchical tree of syntactic phrases; CoreNLP's `parse` annotator generates PCFG trees used for grammar-based analysis and information extraction.
- Pipeline Architecture (Annotation Flow) — CoreNLP processes text through a series of chained annotators (tokenize → ssplit → pos → ner → parse), each adding layers of linguistic metadata to shared `CoreLabel` objects; understanding this flow is critical for debugging, extending, and correctly ordering annotators (see the sketch after this list).
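To see the shared-`CoreLabel` flow concretely, a short sketch reading what earlier annotators wrote onto each token; it assumes a pipeline with at least `tokenize,ssplit,pos,ner`, as in the Get running section, and the example sentence is illustrative.

```java
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;

public class TokenLayersDemo {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,ner");
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

    CoreDocument doc = new CoreDocument("Chris Manning teaches at Stanford.");
    pipeline.annotate(doc);

    // Each annotator layered its metadata onto these same CoreLabel objects.
    for (CoreLabel token : doc.annotation().get(CoreAnnotations.TokensAnnotation.class)) {
      System.out.println(token.word() + "\t" + token.tag() + "\t" + token.ner());
    }
  }
}
```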
🔗Related repos
- `explosion/spaCy` — Alternative production NLP pipeline in Python with similar module structure (tokenization, tagging, parsing, NER); direct competitor for non-Java users preferring modern Python tooling.
- `apache/opennlp` — Java-based NLP toolkit with overlapping features (tokenization, POS, NER, parsing); lighter-weight alternative if GPL-3.0 licensing is a blocking factor.
- `nltk/nltk` — Python NLP library providing similar basic capabilities (tokenization, tagging, parsing); educational alternative for prototyping before scaling to Stanford CoreNLP.
- `stanfordnlp/stanza` — Stanford's modern successor using neural networks and PyTorch; recommended for new projects prioritizing accuracy over a Java runtime environment.
- `stanfordnlp/CoreNLP-models` — Pre-trained model repository companion; required for downloading language-specific annotators (POS taggers, NER models, parsers) used by this pipeline.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive unit tests for KBP semgrex rules validation
The repo contains 40+ semgrex rule files in data/edu/stanford/nlp/kbp/en/semgrex/ (e.g., org_alternate_names.rules, per_spouse.rules) but there are no visible test files validating their syntax, correctness, or coverage. A new contributor could create a test suite that parses and validates these rules, ensuring they compile correctly and match expected patterns. This would prevent rule regressions and improve maintainability of the KBP (Knowledge Base Population) extraction system.
- [ ] Create test/edu/stanford/nlp/kbp/KBPSemgrexRulesTest.java
- [ ] Implement tests that load all .rules files from data/edu/stanford/nlp/kbp/en/semgrex/
- [ ] Validate rule syntax using the existing semgrex parser
- [ ] Add tests to ensure each rule file produces expected match patterns on sample text
- [ ] Run tests via the existing run-tests.yaml GitHub Action workflow
Add GitHub Action workflow for slow integration tests with resource caching
The repo has .github/workflows/run-tests.yaml and run-slow-itests.yaml defined, but the slow-itests workflow likely downloads large model files repeatedly. A contributor could optimize run-slow-itests.yaml to cache downloaded CoreNLP models (trained classifiers, parsers, etc. from the data/ directory) between runs using GitHub's actions/cache, reducing CI execution time and bandwidth. This would make slow tests more practical to run on every PR.
- [ ] Review current .github/workflows/run-slow-itests.yaml implementation
- [ ] Identify which downloaded artifacts (models, trained data files) are reused across test runs
- [ ] Add actions/cache step to cache lib/ and data/ directories with a hash of build.gradle as key
- [ ] Document cache strategy in CONTRIBUTING.md
- [ ] Test that cache hits reduce workflow execution time by 30%+ on subsequent runs
Create integration tests for dcoref (coreference resolution) with expected output validation
The data/edu/stanford/nlp/dcoref/ directory contains test data (STILLALONEWOLF_*.sgm files) and expected output files (expected.txt), but there are likely no automated tests that validate coreference resolution against these reference files. A contributor could create a test suite that loads these documents, runs the coreference pipeline, and compares output to expected.txt, ensuring the module doesn't regress.
- [ ] Create test/edu/stanford/nlp/dcoref/DcorefIntegrationTest.java
- [ ] Load test documents from data/edu/stanford/nlp/dcoref/*.sgm using existing SGM parser
- [ ] Run CoreNLP pipeline with dcoref annotator enabled
- [ ] Parse expected.txt and validate resolved coreference chains match expected output
- [ ] Add parameterized tests for each .sgm file in the directory
- [ ] Integrate test into build.gradle and run-tests.yaml workflow
🌿Good first issues
- Add unit tests for `data/edu/stanford/nlp/classify/iris.*.gold` files: verify that the Iris classifier test data loads and produces expected outputs, improving regression test coverage for the classification module. Difficulty: Low — parsing CSV/gold files and asserting classifications is straightforward.
- Document the Semgrex pattern syntax and provide examples for `data/edu/stanford/nlp/kbp/en/semgrex/`: create a guide showing how to write custom relation extraction rules (the `org_alternate_names.rules` pattern language is non-obvious to newcomers). Difficulty: Medium — requires understanding Semgrex and writing a tutorial; no code changes needed.
- Add support for caching pre-downloaded models in GitHub Actions workflows (`.github/workflows/run-tests.yaml`): download and cache model JARs to speed up CI — currently models download on each test run, slowing builds. Difficulty: Low–Medium — GitHub Actions cache directive + environment variable setup.
⭐Top contributors
- @AngledLuffa — 98 commits
- @manning — 1 commit
- @churow — 1 commit
📝Recent commits
- `1b7edd1` — Version bump - remove lucene, add a :: uniq operator to Semgrex (AngledLuffa)
- `02296fe` — Remove lucene from pom files (AngledLuffa)
- `2d64892` — Add 'uniq' as a keyword to SemgrexParser.jj (AngledLuffa)
- `95cc7ca` — SemgrexMatch now supports the other getters for the SemgrexMatcher results (AngledLuffa)
- `5d3367e` — Upgrade Ssurgeon MergeNodes to treat links inside the same subtree as not relevant to which node is the head. Only cons… (AngledLuffa)
- `dc898c5` — Attempt to read -pattern as a filename - presumably filenames won't typically work directly as a search string. Read in… (AngledLuffa)
- `3d2c5d4` — Fix the sentence fiddling when a document boundary is reached (AngledLuffa)
- `8baa096` — Move the batch processing higher up in the file, before the compile methods (AngledLuffa)
- `d622867` — Add a test of the numbers of things returned by a batch search over a set of graphs (AngledLuffa)
- `570af1f` — Add a basic toString() to summarize a SemgrexMatch (AngledLuffa)
🔒Security observations
Stanford CoreNLP has a moderate security posture with several areas for improvement. The primary concerns are outdated build infrastructure (Gradle 3.2 from 2016), Java 8 target version, and use of fileTree-based dependency management without version control. The codebase lacks visible dependency pinning and SBOM documentation. The project has a security policy in place (SECURITY.md) for responsible disclosure. Recommendations include modernizing the build system, updating Java and Gradle versions, migrating to explicit dependency management with Maven Central, and implementing comprehensive input validation for rule-based processing.
- Medium · Outdated Gradle Version — `build.gradle` (wrapper task). The build configuration specifies Gradle version 3.2, which was released in 2016 and is significantly outdated. This version likely contains known security vulnerabilities and lacks modern security features and dependency verification mechanisms. Fix: update to the latest stable Gradle version (7.x or 8.x); run `gradle wrapper --gradle-version=<latest-version>` to update the Gradle wrapper.
- Medium · Broad Dependency Import Without Version Pinning — `build.gradle` (dependencies section). The build.gradle uses fileTree to include all JAR files in the 'lib' and 'liblocal' directories without explicit version control or dependency management. This approach makes it difficult to track vulnerabilities and perform security audits on transitive dependencies. Fix: migrate to Maven Central or a similar repository with explicit version pinning; declare each dependency with explicit group/name/version coordinates instead of fileTree.
- Medium · Old Java Target Version — `build.gradle` (sourceCompatibility, targetCompatibility). The project targets Java 1.8 (sourceCompatibility = 1.8), which is now quite old and may lack modern security features; even Oracle's extended support for Java 8 runs only through December 2030, and newer Java versions (11, 17, 21) include significant security improvements. Fix: update to at least Java 11 (LTS) or Java 17 (LTS); set sourceCompatibility and targetCompatibility to '11' or higher and test thoroughly for compatibility issues.
- Low · Missing Input Validation Framework — `data/edu/stanford/nlp/kbp/*/semgrex/*.rules`. The file structure shows extensive rule files (.rules) for NER and semantic analysis. Without explicit evidence in the provided structure, there is potential risk of regex-based DoS when processing large inputs without validation. Fix: implement input size limits and complexity thresholds for rule processing; add timeout mechanisms for regex matching operations to prevent ReDoS attacks.
- Low · No SBOM or Dependency Manifest Visible — Repository root. The codebase lacks a Software Bill of Materials (SBOM) or explicit dependency manifest (pom.xml or a gradle dependencies listing), making security auditing and vulnerability tracking difficult. Fix: generate and maintain an SBOM using tools like CycloneDX or SPDX; run `gradle dependencies` regularly to audit transitive dependencies; consider adding a gradle.lockfile for reproducible builds.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.