RepoPilot

aboul3la/Sublist3r

Fast subdomains enumeration tool for penetration testers

Overall: Mixed

Stale — last commit 2y ago

Use as dependency: Concerns (weakest axis)

copyleft license (GPL-2.0) — review compatibility; last commit was 2y ago…

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Mixed

last commit was 2y ago; no CI workflows detected

  • 26+ active contributors
  • GPL-2.0 licensed
  • Stale — last commit 2y ago
  • Concentrated ownership — top contributor handles 54% of recent commits
  • GPL-2.0 is copyleft — check downstream compatibility
  • No CI workflows detected
  • No test directory detected
What would change the summary?
  • Use as dependency: Concerns → Mixed if: relicense under MIT/Apache-2.0 (rare for established libs)
  • Deploy as-is: Mixed → Healthy if: 1 commit in the last 180 days

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Forkable" badge

Paste into your README — live-updates from the latest cached analysis.

Variant: "RepoPilot: Forkable"
[![RepoPilot: Forkable](https://repopilot.app/api/badge/aboul3la/sublist3r?axis=fork)](https://repopilot.app/r/aboul3la/sublist3r)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/aboul3la/sublist3r on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: aboul3la/Sublist3r

Generated by RepoPilot · 2026-05-07 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/aboul3la/Sublist3r shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Stale — last commit 2y ago

  • 26+ active contributors
  • GPL-2.0 licensed
  • ⚠ Stale — last commit 2y ago
  • ⚠ Concentrated ownership — top contributor handles 54% of recent commits
  • ⚠ GPL-2.0 is copyleft — check downstream compatibility
  • ⚠ No CI workflows detected
  • ⚠ No test directory detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live aboul3la/Sublist3r repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/aboul3la/Sublist3r.

What it runs against: a local clone of aboul3la/Sublist3r — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in aboul3la/Sublist3r | Confirms the artifact applies here, not a fork |
| 2 | License is still GPL-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 673 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>aboul3la/Sublist3r</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of aboul3la/Sublist3r. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/aboul3la/Sublist3r.git
#   cd Sublist3r
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of aboul3la/Sublist3r and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "aboul3la/Sublist3r(\.git)?\b" \
  && ok "origin remote is aboul3la/Sublist3r" \
  || miss "origin remote is not aboul3la/Sublist3r (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(GPL-2\.0)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"GPL-2\.0\"" package.json 2>/dev/null) \
  && ok "license is GPL-2.0" \
  || miss "license drift — was GPL-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "sublist3r.py" \
  && ok "sublist3r.py" \
  || miss "missing critical file: sublist3r.py"
test -f "subbrute/subbrute.py" \
  && ok "subbrute/subbrute.py" \
  || miss "missing critical file: subbrute/subbrute.py"
test -f "subbrute/names.txt" \
  && ok "subbrute/names.txt" \
  || miss "missing critical file: subbrute/names.txt"
test -f "requirements.txt" \
  && ok "requirements.txt" \
  || miss "missing critical file: requirements.txt"
test -f "setup.py" \
  && ok "setup.py" \
  || miss "missing critical file: setup.py"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 673 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~643d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/aboul3la/Sublist3r"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Sublist3r is a Python-based OSINT subdomain enumeration tool that discovers subdomains of a target domain by querying multiple search engines (Google, Yahoo, Bing, Baidu, Ask) and threat intelligence sources (Netcraft, VirusTotal, ThreatCrowd, DNSdumpster) in parallel, with integrated bruteforce capability via the subbrute wordlist module. It streamlines the reconnaissance phase of penetration testing by automating passive and active subdomain discovery that would otherwise require manual querying across multiple sources. Monolithic single-file design: sublist3r.py is the main orchestrator importing the subbrute module (subbrute/subbrute.py), which handles bruteforce attacks using wordlists in subbrute/names.txt and resolver lists in subbrute/resolvers.txt. The tool is invoked as a CLI script with argparse-based argument parsing, with no separate library/CLI boundary.

👥Who it's for

Penetration testers, bug bounty hunters, and security researchers who need to perform comprehensive subdomain enumeration during the reconnaissance phase of security assessments. Users are typically infosec professionals with command-line familiarity running engagements against specific target domains.

🌱Maturity & risk

This is an established, production-ready tool from the 2015-2016 era (based on its Python 2.7 and 3.4 support targets), currently in maintenance mode rather than active development. The codebase is simple and focused, with no test infrastructure, CI/CD, or recent commit history visible in the provided data — it works reliably for its narrow use case but shows limited ongoing maintenance activity.

Dependencies are minimal (requests, dnspython, argparse) and stable, reducing supply-chain risk, but the tool depends heavily on search engines and third-party APIs that may change their scraping detection or rate limiting, breaking functionality without code changes. The single-file structure (sublist3r.py as main entry point) and lack of tests mean regression detection is manual, and the continued Python 2/3 dual support suggests legacy versions were never dropped even after Python 2 reached end of life.

Active areas of work

No active development signals are visible in the provided file list. The tool appears stable and unmaintained, with the README and setup.py present but no CI configuration (no .github/workflows/, .travis.yml, or tox.ini visible), suggesting it's in archive/reference status rather than under active iteration.

🚀Get running

git clone https://github.com/aboul3la/Sublist3r.git && cd Sublist3r && pip install -r requirements.txt && python sublist3r.py -d example.com

Daily commands: python sublist3r.py -d <domain> [options] for enumeration. With bruteforce: python sublist3r.py -d <domain> -b. Use -t <threads> to control parallelism and -v for verbose real-time output.

🗺️Map of the codebase

  • sublist3r.py — Main entry point and orchestrator; contains CLI argument parsing and coordinates all enumeration sources (Google, Yahoo, Bing, Netcraft, Virustotal, ThreatCrowd, DNSdumpster, ReverseDNS, and subbrute integration).
  • subbrute/subbrute.py — Core subdomain brute-force engine integrated with Sublist3r; handles DNS resolution and wordlist-based enumeration that increases discovery likelihood.
  • subbrute/names.txt — Wordlist of common subdomain names used by the brute-force engine; directly impacts enumeration effectiveness and must be maintained for quality results.
  • requirements.txt — Dependency manifest for requests, dnspython, and argparse; critical for reproducible environments and understanding external service integration.
  • setup.py — Package configuration and distribution metadata; defines how the tool is installed and what Python versions are supported.

🧩Components & responsibilities

  • CLI Entry Point (sublist3r.py) (argparse, Python threading/async) — Parses arguments, initializes all enumeration modules, orchestrates parallel/sequential execution, aggregates and deduplicates results.
    • Failure mode: Exits with parse error; one missing required argument halts execution; does not gracefully degrade
  • Search Engine Enumeration (sublist3r.py classes) (requests, HTML/JSON parsing) — Issues HTTP requests to Google, Bing, Yahoo, Baidu, Ask, and extracts subdomains from HTML/JSON responses.
    • Failure mode: Network timeouts or rate-limiting returns no results; retry logic may not be implemented; single source failure doesn't block others
  • Threat Intelligence Queries (sublist3r.py classes) (requests, API consumption) — Queries Netcraft, VirusTotal, ThreatCrowd, DNSdumpster, and ReverseDNS for known subdomains.
    • Failure mode: API keys may be required but not validated upfront; API downtime silently skips source; no fallback on failures
  • Brute-Force Engine (subbrute/subbrute.py) (dnspython, file I/O, DNS UDP) — Loads wordlist, iterates candidates, performs DNS resolution against resolver list, returns validated results.
    • Failure mode: Malformed resolver list causes crashes; large wordlists consume significant memory; resolver timeouts cause slow execution

🔀Data flow

  • User CLI → sublist3r.py — Domain name, output format, and enumeration options passed via command-line arguments
  • sublist3r.py → Search Engines (Google, Bing, Yahoo, Baidu, Ask) — Crafted HTTP requests with domain; receives HTML/JSON responses containing hyperlinks and cached subdomain references
  • sublist3r.py → Threat Intelligence APIs (VirusTotal, ThreatCrowd, Netcraft, DNSdumpster, ReverseDNS) — Domain queries; receives JSON/XML with historical DNS records and known subdomains
  • sublist3r.py → subbrute/subbrute.py — Domain name; returns set of brute-forced subdomains validated via DNS resolution
  • subbrute/subbrute.py → subbrute/names.txt & subbrute/resolvers.txt — Loads wordlist and resolver IPs at initialization
  • subbrute/subbrute.py → DNS Resolvers — UDP DNS queries for A/AAAA/CNAME records of candidate subdomains
  • All enumeration sources → sublist3r.py aggregation — All discovered subdomains collected, deduplicated, and merged into final result set
  • sublist3r.py → stdout — Final deduplicated subdomain list printed to console or optional file output
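The fan-out and aggregation steps above can be sketched with stdlib threading. This is illustrative only: the function names (fake_google, safe_call, enumerate_all) are hypothetical stand-ins, not Sublist3r's actual API.

```python
# Hypothetical sketch of the fan-out/aggregate pattern described above.
# All names are illustrative; the real orchestration lives in sublist3r.py.
from concurrent.futures import ThreadPoolExecutor

def fake_google(domain):       # stand-in for a search-engine enumerator
    return {f"www.{domain}", f"mail.{domain}"}

def fake_threatcrowd(domain):  # stand-in for a threat-intel source
    return {f"mail.{domain}", f"dev.{domain}"}

def safe_call(source, domain):
    # One failing source must not block the others (see failure modes above).
    try:
        return source(domain)
    except Exception:
        return set()

def enumerate_all(domain, sources):
    found = set()
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        for result in pool.map(lambda s: safe_call(s, domain), sources):
            found |= result        # set union deduplicates across sources
    return sorted(found)

print(enumerate_all("example.com", [fake_google, fake_threatcrowd]))
```

The set union is why a subdomain reported by several sources appears only once in the final output.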

🛠️How to make changes

Add a new search engine enumeration source

  1. Define a new class inheriting from the enumeration base pattern in sublist3r.py (e.g., class GoogleEnum, BingEnum); implement subdomain extraction from search results (sublist3r.py)
  2. Implement the required method (typically returns a set of discovered subdomains) that queries the search engine's API or web interface (sublist3r.py)
  3. Add the new enumeration source to the main orchestrator logic in sublist3r.py that calls all enumeration modules and aggregates results (sublist3r.py)
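A minimal sketch of the class pattern in steps 1-3, assuming a shared base class. The names (EnumeratorBase, DuckDuckGoEnum) are hypothetical; check sublist3r.py for the real base-class name and method signatures before mirroring them.

```python
# Hypothetical enumerator-class sketch; names do not match sublist3r.py.
import re

class EnumeratorBase:
    def __init__(self, domain):
        self.domain = domain
        self.subdomains = set()

    def extract_domains(self, response_text):
        raise NotImplementedError

    def enumerate(self):
        # Real implementations page through live search results with
        # requests; this sketch stays offline and feeds canned HTML instead.
        raise NotImplementedError

class DuckDuckGoEnum(EnumeratorBase):  # hypothetical new source
    def extract_domains(self, response_text):
        pattern = re.compile(r"([\w.-]+\.%s)" % re.escape(self.domain))
        self.subdomains |= set(pattern.findall(response_text))
        return self.subdomains

html = '<a href="https://blog.example.com">blog</a> <a href="https://api.example.com">api</a>'
print(DuckDuckGoEnum("example.com").extract_domains(html))
```

Once such a class returns a set of subdomains, step 3 is wiring it into the orchestrator's source list so its results join the shared aggregation.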

Expand brute-force wordlist

  1. Add new subdomain candidates to the wordlist file; each line contains one subdomain prefix (subbrute/names.txt)
  2. Optionally add new DNS resolver IPs to subbrute/resolvers.txt to improve resolution performance and redundancy (subbrute/resolvers.txt)

Add a new command-line option

  1. Define the new argument using argparse in the main function of sublist3r.py with appropriate help text and default values (sublist3r.py)
  2. Implement the logic to handle the new option within the orchestration flow, enabling or configuring enumeration modules based on the argument (sublist3r.py)
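Step 1 in miniature; the --json flag here is purely illustrative, not an option Sublist3r actually ships.

```python
# Minimal argparse sketch for adding a new CLI option.
# The --json flag is hypothetical, used only to show the pattern.
import argparse

parser = argparse.ArgumentParser(prog="sublist3r")
parser.add_argument("-d", "--domain", required=True, help="Target domain")
parser.add_argument("--json", action="store_true",
                    help="Emit results as JSON instead of plain text")

# Parsing an explicit argv list instead of sys.argv keeps this runnable here.
args = parser.parse_args(["-d", "example.com", "--json"])
print(args.domain, args.json)
```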

🔧Why these technologies

  • Python 2.7 & 3.4+ — Cross-version compatibility maximizes reach for penetration testers and bug hunters with varying Python environments.
  • requests library — Simplifies HTTP requests to search engines and threat intelligence APIs without manual socket handling.
  • dnspython — Enables low-level DNS operations for resolver queries and validation during brute-force enumeration.
  • argparse — Provides robust command-line argument parsing for flexible tool configuration and user experience.

⚖️Trade-offs already made

  • Multi-source OSINT enumeration over single API

    • Why: Maximizes subdomain discovery by querying diverse sources (search engines, threat feeds, DNS databases).
    • Consequence: Increased network requests, slower execution, higher rate-limiting risk; requires careful request throttling to avoid IP blocking.
  • Integrated subbrute engine over external subprocess calls

    • Why: Tighter integration and direct Python execution improves performance and control.
    • Consequence: Adds code complexity and maintenance burden; couples brute-force logic to main codebase.
  • In-memory result deduplication and aggregation

    • Why: Simple and fast for typical use cases (hundreds to thousands of subdomains).
    • Consequence: High memory overhead for very large target scopes; no persistence or incremental result storage.

🚫Non-goals (don't propose these)

  • Does not perform subdomain validation or live probing (e.g., HTTP status checks, certificate validation)
  • Does not include authentication mechanisms; relies on public APIs and search engines without credentials
  • Not designed for real-time monitoring or continuous enumeration; one-shot enumeration tool only
  • Does not include result storage in databases; outputs to stdout only
  • Not responsible for anonymization or proxy rotation; assumes responsible user behavior

⚠️Anti-patterns to avoid

  • Unbounded parallel requests without rate limiting — sublist3r.py (search engine and threat intelligence enumeration classes)

🪤Traps & gotchas

The tool performs web scraping against search engines which may rate-limit, block, or change HTML structure unexpectedly—breakage is silent unless verbose mode (-v) is enabled. DNS lookups in subbrute require network access and may be blocked by corporate firewalls or ISPs performing DNS filtering. Python 2 reached EOL in 2020, so running on Python 2.7 has security implications despite the tool supporting it. The tool does not handle HTTPS certificate validation errors gracefully, which may cause hangs on certain networks with SSL inspection.

🏗️Architecture

💡Concepts to learn

  • OSINT (Open Source Intelligence) — Sublist3r's core function is gathering intelligence from public sources (search engines, DNS databases, threat feeds); understanding OSINT methodology and its legal/ethical boundaries is essential for responsibly using this tool
  • Web Scraping / HTML Parsing — Sublist3r enumerates subdomains by parsing HTML responses from search engines; understanding how to extract data from unstructured HTML and handling parsing breakage when sites change structure is critical for maintaining scrapers
  • DNS Resolution / DNS Brute-Forcing — The subbrute module performs DNS lookups against a wordlist to discover subdomains; understanding DNS query behavior, resolver configuration, and distinguishing NXDOMAIN from no-answer responses is essential for the bruteforce engine
  • Thread Pooling / Parallel HTTP Requests — Sublist3r uses threading (-t flag) to parallelize search engine queries and DNS lookups; understanding thread safety, rate limiting per-thread, and avoiding thread pool exhaustion prevents enumeration slowdowns and API blocks
  • Reverse DNS / PTR Records — Sublist3r enumerates via ReverseDNS source, which queries PTR records associated with IP blocks; understanding reverse DNS zones and how ISPs publish PTR records reveals subdomain ownership relationships
  • Search Engine Query Operators — Sublist3r crafts queries using search engine-specific operators (e.g., site: for Google) to filter results; knowing operator syntax and limitations per search engine is key to understanding why some engines yield more results than others
  • Rate Limiting / Backoff Strategies — Search engines and DNS resolvers enforce rate limits; Sublist3r lacks explicit rate-limit handling, making it vulnerable to IP blocking—understanding exponential backoff and request throttling prevents blacklisting during enumeration
Related projects

  • projectdiscovery/subfinder — Modern Go-based subdomain enumeration tool covering the same OSINT sources (Google, Bing, Shodan, Censys) with better performance, active maintenance, and structured JSON output
  • TheRook/subbrute — The upstream bruteforce module that Sublist3r integrates; maintaining separate awareness of subbrute updates is important as Sublist3r vendors it as subbrute/
  • OWASP/Amass — Comprehensive Go-based reconnaissance tool combining subdomain enumeration, DNS resolution, ASN enumeration, and DNS zone walking in a single framework with persistent database support
  • nahamsec/lazyrecon — Bash wrapper that orchestrates multiple tools including Sublist3r as part of a larger reconnaissance pipeline, showing how Sublist3r integrates into multi-stage security workflows
  • guelfoweb/knock — Lightweight Python subdomain brute-forcer focusing on DNS enumeration, complementary to Sublist3r's search engine approach and using similar Python stack
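The rate-limiting gap flagged in the concepts above could be closed with a small exponential-backoff wrapper. A stdlib-only sketch (with_backoff is a hypothetical helper, not anything in Sublist3r); the sleep function is injectable so the sketch stays testable without real delays:

```python
# Hypothetical exponential-backoff wrapper for rate-limited sources.
import time

def with_backoff(fn, retries=4, base_delay=0.1, sleep=time.sleep):
    # Retry fn with exponentially growing delays: 0.1s, 0.2s, 0.4s, ...
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Simulate a source that is rate-limited twice before succeeding.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")  # simulated HTTP 429
    return ["api.example.com"]

delays = []                    # record delays instead of actually sleeping
print(with_backoff(flaky_query, sleep=delays.append))
print(delays)
```

In a real enumerator you would catch only retryable errors (timeouts, HTTP 429/503) rather than bare Exception, and add jitter to the delay.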

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add unit tests for sublist3r.py and subbrute/subbrute.py core modules

The repo has no test directory or test files. Given that sublist3r.py is the main entry point and subbrute/subbrute.py handles subdomain enumeration, adding unit tests would catch regressions in DNS resolution, search engine parsing, and bruteforce logic. This is critical for a security tool where accuracy matters.

  • [ ] Create tests/ directory with __init__.py
  • [ ] Add tests/test_sublist3r.py to test argument parsing and main enumeration flow in sublist3r.py
  • [ ] Add tests/test_subbrute.py to test DNS resolution logic and wordlist handling in subbrute/subbrute.py
  • [ ] Add tests/test_search_engines.py to mock and test parsing logic from search engine sources
  • [ ] Update requirements.txt to include pytest and requests-mock for testing
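One such test could look like the sketch below. load_wordlist is a hypothetical helper written for illustration, not subbrute's real function name; the real test would import and exercise subbrute's actual loader.

```python
# Illustrative unittest sketch for the wordlist-handling checklist item.
# load_wordlist is hypothetical; subbrute's real API may differ.
import io
import unittest

def load_wordlist(fp):
    # Strip blanks and comment lines, deduplicate while preserving order.
    seen, words = set(), []
    for line in fp:
        word = line.strip()
        if word and not word.startswith("#") and word not in seen:
            seen.add(word)
            words.append(word)
    return words

class TestLoadWordlist(unittest.TestCase):
    def test_skips_comments_blanks_and_dupes(self):
        fp = io.StringIO("www\n# comment\n\nmail\nwww\n")
        self.assertEqual(load_wordlist(fp), ["www", "mail"])

unittest.main(argv=["x"], exit=False)
```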

Add GitHub Actions workflow for Python 2.7 and Python 3.x compatibility testing

README states the tool supports both Python 2.7.x and Python 3.x, but there is no CI workflow to verify this. Adding a GitHub Actions workflow would ensure pull requests don't break compatibility across versions and catch issues with dnspython and requests across versions.

  • [ ] Create .github/workflows/test.yml with matrix testing for Python 2.7, 3.6, 3.8, 3.9, 3.10+
  • [ ] Test installation of requirements.txt on each Python version
  • [ ] Run basic smoke test: execute sublist3r.py --help and verify exit code 0
  • [ ] Run unit tests created from PR idea #1 across all Python versions
  • [ ] Add status badge to README.md

Refactor search engine enumeration logic into separate modules in sublist3r/engines/

The main sublist3r.py file likely contains inline search engine parsers (Google, Yahoo, Bing, Baidu, Ask, Netcraft, Virustotal, ThreatCrowd, DNSdumpster, ReverseDNS). This creates a monolithic file that's hard to test and maintain. Splitting into separate engine modules would improve code organization, testability, and make it easier for contributors to add new sources.

  • [ ] Create sublist3r/engines/ directory with __init__.py
  • [ ] Extract Google parsing logic to sublist3r/engines/google.py
  • [ ] Extract other search engines to individual files: yahoo.py, bing.py, baidu.py, ask.py, netcraft.py, virustotal.py, threatcrowd.py, dnsdumpster.py, reversedns.py
  • [ ] Create a base class or interface in sublist3r/engines/base.py for common search engine behavior
  • [ ] Update sublist3r.py to import and call engines from the new module structure
  • [ ] Update tests from PR idea #1 to test individual engine modules

🌿Good first issues

  • Add unit tests for search engine scrapers in sublist3r.py—currently no test file exists, making regressions in Bing/Google/Yahoo scraper logic undetectable when those sites change HTML structure
  • Implement a results cache/output file feature to avoid re-querying on interrupted runs—currently the tool has no resumption capability or CSV export despite being intended for penetration testers who need to document findings
  • Create a config file template (e.g., config.ini or sublist3r.conf) to allow users to manage API keys and thread counts without command-line flags—the tool currently has no configuration file support despite having many tunable parameters

Top contributors

Click to expand

📝Recent commits

Click to expand
  • 729d649 — Fixed cannot pickle '_thread.lock' error in python 3.8+ (aboul3la)
  • da45743 — Initialise thread locks after processes start, remove unused locks (Allek Mott)
  • 6af1b8c — Fixed broken Netcraft engine (aboul3la)
  • 61ebf36 — Fixed the '<BR>' issue in CrtSearch (aboul3la)
  • 3f5fc6d — Updated subbrute resolvers list (aboul3la)
  • 0812d81 — Merge pull request #186 from AdriiiPRodri/patch-1 (aboul3la)
  • 5d33d9f — Fixed DNSdumpster engine (aboul3la)
  • 3f2fead — Fix typo in README (AdriiiPRodri)
  • 466988e — Merge pull request #172 (Add no-color option) (aboul3la)
  • fcf2429 — Merge pull request #184 from onepersentofmonster/master (aboul3la)

🔒Security observations

  • High · Outdated Python 2 Support — README.md, setup.py, sublist3r.py. The codebase explicitly supports Python 2.7.x, which reached end-of-life on January 1, 2020. Python 2 no longer receives security updates, making it vulnerable to known exploits. Dependencies like dnspython and requests may have deprecated Python 2 support in newer versions. Fix: Remove Python 2 support entirely. Migrate to Python 3.8+ which has long-term support. Update all dependencies to versions that no longer support Python 2.
  • High · Unspecified Dependency Versions — requirements.txt. The requirements.txt file likely contains dependencies without pinned versions (argparse, dnspython, requests). This allows installation of potentially vulnerable versions when dependencies update. Without version constraints, the tool could inadvertently use outdated packages with known CVEs. Fix: Pin all dependencies to specific versions (e.g., dnspython==2.3.0, requests==2.31.0). Regularly audit and update dependencies using tools like safety or pip-audit. Implement a dependency management strategy.
  • High · External HTTP Requests Without Validation — sublist3r.py. The tool makes HTTP requests to multiple external services (Google, Yahoo, Bing, Baidu, Netcraft, Virustotal, ThreatCrowd, DNSdumpster, ReverseDNS) using the requests library. Without proper SSL/TLS certificate validation, request timeouts, or response validation, this could expose the tool to MITM attacks, DNS poisoning, or injection attacks. Fix: Implement proper SSL/TLS certificate validation, set reasonable request timeouts, validate and sanitize all HTTP responses before processing. Use verify=True in requests calls. Implement error handling for malformed responses.
  • Medium · DNS Injection Risk via User Input — sublist3r.py, subbrute/subbrute.py. The tool processes DNS queries and domain names from user input. If input validation is insufficient, attackers could inject malicious DNS queries or craft responses that cause the application to behave unexpectedly or process arbitrary data. Fix: Implement strict input validation for domain names using RFC-compliant domain name regex patterns. Validate all DNS responses. Use the dnspython library's built-in validation features.
  • Medium · Wordlist File Security — subbrute/names.txt, subbrute/resolvers.txt. The subbrute/names.txt and subbrute/resolvers.txt files are included in the repository. If these files are modified by an attacker, the tool's behavior could be compromised. Additionally, the tool may be vulnerable to path traversal if file paths are not properly validated. Fix: Validate all file paths to prevent directory traversal attacks. Consider using absolute paths or restricting file access to a specific directory. Implement file integrity checks (checksums/signatures) for critical data files.
  • Medium · No Rate Limiting or API Throttling — sublist3r.py. The tool makes requests to multiple public services and APIs (Google, Bing, etc.). Without rate limiting, the tool could be detected and blocked, or worse, used as a vector for abuse or DDoS attacks against target services. Fix: Implement rate limiting with exponential backoff. Add User-Agent rotation and request throttling. Respect robots.txt and terms of service for each service. Implement request delays between queries.
  • Low · Missing Security Guidance in Documentation — README.md. The README and documentation do not mention security best practices for users, such as running the tool behind a proxy, ethical use guidelines, or warnings about legal implications of subdomain enumeration. Fix: Add a security and legal disclaimer section to the README. Document best practices for responsible use. Include warnings about legal implications and proper authorization requirements.
  • Low · No Input Sanitization Documentation — README.md, setup.py. There is no clear documentation about how user input is validated or sanitized, making it difficult for security auditors and users to understand the security posture of the application. Fix: Document all input validation mechanisms. Provide a SECURITY.md file explaining the security model and threat considerations. Add inline code comments for critical security functions.

LLM-derived; treat as a starting point, not a security audit.
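The input-validation fix suggested in the DNS injection item could start from an RFC-1035-style hostname check. This is a hedged sketch to tighten for your own threat model, not Sublist3r's existing behavior:

```python
# Hypothetical RFC-1035-style domain validation sketch.
import re

# A label: 1-63 chars, alphanumeric at both ends, hyphens allowed inside.
LABEL = r"[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?"
DOMAIN_RE = re.compile(rf"^(?:{LABEL}\.)+{LABEL}$")

def is_valid_domain(name: str) -> bool:
    # 253 is the practical maximum length of a full domain name.
    return len(name) <= 253 and DOMAIN_RE.fullmatch(name) is not None

print(is_valid_domain("sub.example.com"))   # well-formed
print(is_valid_domain("evil.com; rm -rf"))  # rejected: shell metacharacters
```

Internationalized domains would need IDNA encoding before this check; the regex above only covers ASCII hostnames.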


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
