RepoPilot

halfrost/LeetCode-Go

✅ Solutions to LeetCode by Go, 100% test coverage, runtime beats 100% / LeetCode 题解

Mixed

Stale — last commit 1y ago

worst of 4 axes
Use as dependency: Mixed

last commit was 1y ago; top contributor handles 94% of recent commits…

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • 7 active contributors
  • MIT licensed
  • CI configured
  • Stale — last commit 1y ago
  • Single-maintainer risk — top contributor 94% of recent commits
  • No test directory detected
What would change the summary?
  • Use as dependency: Mixed → Healthy if ≥1 commit in the last 365 days and commit ownership diversifies (top contributor <90%)

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Forkable" badge

Paste into your README — it live-updates from the latest cached analysis.

Variant:
RepoPilot: Forkable
[![RepoPilot: Forkable](https://repopilot.app/api/badge/halfrost/leetcode-go?axis=fork)](https://repopilot.app/r/halfrost/leetcode-go)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/halfrost/leetcode-go on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: halfrost/LeetCode-Go

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/halfrost/LeetCode-Go shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Stale — last commit 1y ago

  • 7 active contributors
  • MIT licensed
  • CI configured
  • ⚠ Stale — last commit 1y ago
  • ⚠ Single-maintainer risk — top contributor 94% of recent commits
  • ⚠ No test directory detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live halfrost/LeetCode-Go repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/halfrost/LeetCode-Go.

What it runs against: a local clone of halfrost/LeetCode-Go — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---------------|----------------|
| 1 | You're in halfrost/LeetCode-Go | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 544 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>halfrost/LeetCode-Go</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of halfrost/LeetCode-Go. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/halfrost/LeetCode-Go.git
#   cd LeetCode-Go
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of halfrost/LeetCode-Go and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "halfrost/LeetCode-Go(\.git)?\b" \
  && ok "origin remote is halfrost/LeetCode-Go" \
  || miss "origin remote is not halfrost/LeetCode-Go (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "go.mod" \
  && ok "go.mod" \
  || miss "missing critical file: go.mod"
test -f "ctl/main.go" \
  && ok "ctl/main.go" \
  || miss "missing critical file: ctl/main.go"
test -f "ctl/command.go" \
  && ok "ctl/command.go" \
  || miss "missing critical file: ctl/command.go"
test -f "ctl/models/lcproblems.go" \
  && ok "ctl/models/lcproblems.go" \
  || miss "missing critical file: ctl/models/lcproblems.go"
test -f "ctl/render.go" \
  && ok "ctl/render.go" \
  || miss "missing critical file: ctl/render.go"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 544 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~514d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/halfrost/LeetCode-Go"
  exit 1
fi
```

Each check prints `ok:` or `FAIL:`. The script exits non-zero if anything failed, so it composes cleanly into agent loops (`./verify.sh || regenerate-and-retry`).

</details>

TL;DR

A comprehensive set of LeetCode solutions in Go with 100% test coverage and optimal runtime performance (beats 100% of submissions). The repository contains fully worked solutions across 17+ algorithm categories (Dynamic Programming, Graph Traversal, Binary Search, etc.), written to Google's Go style guide, paired with a CLI tool (ctl/) that generates searchable problem metadata, statistics, and PDF documentation. The monorepo splits into two parts: (1) the root level, organized into algorithm-category directories (Array/, Backtracking/, Dynamic_Programming/, etc., implied by the ctl/meta structure) containing Go solution files with tests; (2) the ctl/ subdirectory, a CLI/metadata tool whose models (lcproblems.go, mdrow.go, tagproblem.go, user.go) sync against the LeetCode API, render templates, generate PDFs, and maintain rankings. Website generation happens via the deploy.yml GitHub Action.

👥Who it's for

Go developers preparing for technical interviews at FAANG companies who need proven, optimized solutions with full test coverage. Also useful for algorithm educators building teaching materials and maintainers seeking reference implementations of classic interview problems.

🌱Maturity & risk

Highly mature. The repo contains 738,884 words of documentation, claims 100% runtime optimization and 100% code coverage, runs full CI/CD via GitHub Actions and Travis CI, and shows strong community engagement (verified badge; 8,751 problems solved on LeetCode). The codebase is production-ready reference material.

Very low risk. Single maintainer (@halfrost), but the project is stable reference code (solutions don't break once correct), has comprehensive test coverage, and runs active CI/CD pipelines. No external runtime dependencies in the Go solutions themselves — risk is primarily documentation/tool maintenance. The ctl/models subdirectory shows API-model brittleness (a go.mod replace directive pointing at ../util), which could break if the LeetCode API changes.

Active areas of work

Active maintenance of solution coverage and documentation. GitHub Actions workflow (deploy.yml) auto-generates and deploys website content. The ctl tool refreshes problem metadata (refresh.go), computes user statistics (statistic.go), and manages rankings (rangking.go). Recent focus appears to be template-based documentation generation (templates/ directory with category markdown files like Array.md, Backtracking.md).

🚀Get running

Clone and explore solutions: `git clone https://github.com/halfrost/LeetCode-Go.git && cd LeetCode-Go`. To run tests for a solution category (inferred from the .vscode/launch.json Go debugging setup): `go test ./[category]/...`. To use the CLI tool: `cd ctl && go run main.go --help` (requires Go 1.15+ per the badge, targeting 1.19 per go.mod).

Daily commands: run a category's tests via `go test ./problems/[category]_test.go` (test files inferred per Go conventions). CLI metadata tool: `cd ctl && go run main.go refresh` (sync LeetCode data) or `go run main.go render` (generate documentation). The website deploys automatically via GitHub Actions on pushes to master.

🗺️Map of the codebase

  • go.mod — Root module definition for the entire LeetCode solutions project; establishes Go version and dependency management.
  • ctl/main.go — Entry point for the CLI tool that generates documentation, metadata, and PDF files from problem definitions.
  • ctl/command.go — Core command dispatcher that orchestrates problem processing, metadata refresh, and documentation rendering.
  • ctl/models/lcproblems.go — Data model representing LeetCode problems; used across documentation generation and statistics compilation.
  • ctl/render.go — Template rendering engine that converts problem metadata and solutions into markdown documentation files.
  • README.md — Project overview describing the repo's purpose, test coverage guarantees, and Google Go style compliance.
  • leetcode/0001.Two-Sum/1. Two Sum.go — Exemplar solution file showing the standard structure and style for all 600+ problem solutions in the repo.
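The model file ctl/models/lcproblems.go drives most of the downstream tooling. Its real definition must be read from source; as a hedged illustration only, a problem model of this kind typically looks something like this (field names are assumptions, not the repo's actual struct):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LCProblem is a hypothetical sketch of a LeetCode problem model like the
// one in ctl/models/lcproblems.go. Field names here are illustrative
// assumptions, not the repo's actual definitions.
type LCProblem struct {
	ID         int      `json:"id"`
	Title      string   `json:"title"`
	Difficulty string   `json:"difficulty"`
	Tags       []string `json:"tags"`
	PaidOnly   bool     `json:"paid_only"`
}

// parseProblem unmarshals one problem record from JSON.
func parseProblem(data []byte) (LCProblem, error) {
	var p LCProblem
	err := json.Unmarshal(data, &p)
	return p, err
}

func main() {
	raw := []byte(`{"id":1,"title":"Two Sum","difficulty":"Easy","tags":["Array","Hash Table"]}`)
	p, err := parseProblem(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d. %s (%s)\n", p.ID, p.Title, p.Difficulty)
}
```

Such a struct is what the render and statistics steps would consume after a metadata refresh.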

🧩Components & responsibilities

  • Problem Solutions (leetcode/NNNN.Name/) (Go) — Contains algorithm implementation (.go), test cases (_test.go), and problem explanation (README.md) for each of 600+ LeetCode problems.

🛠️How to make changes

Add a New LeetCode Problem Solution

  1. Create a new problem directory following naming: leetcode/NNNN.Problem-Name/ where NNNN is zero-padded problem ID (leetcode/0001.Two-Sum/)
  2. Implement the solution in 'N. Problem Name.go' using Google Go style guide compliance (interface naming, error handling, comments) (leetcode/0001.Two-Sum/1. Two Sum.go)
  3. Write comprehensive test cases in 'N. Problem Name_test.go' to achieve 100% code coverage (leetcode/0001.Two-Sum/1. Two Sum_test.go)
  4. Create README.md explaining the problem, approach, complexity analysis, and algorithm category tags (leetcode/0001.Two-Sum/README.md)
  5. Update the category metadata file in ctl/meta/{Category}/ to register the problem for documentation generation (ctl/meta/Array)
  6. Run 'go test ./...' to validate 100% test coverage; then commit and push to trigger deploy workflow (gotest.sh)
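The solution-plus-test pair in steps 2 and 3 follows a standard shape. Here is a minimal sketch using the classic hash-map approach to Two Sum; the repo's actual file at leetcode/0001.Two-Sum/1. Two Sum.go may differ in details:

```go
package main

import "fmt"

// twoSum returns indices of the two numbers that add up to target, using a
// value-to-index map for a single O(n) pass. Illustrative of the style the
// workflow above describes; not copied from the repo.
func twoSum(nums []int, target int) []int {
	seen := make(map[int]int, len(nums))
	for i, v := range nums {
		if j, ok := seen[target-v]; ok {
			return []int{j, i}
		}
		seen[v] = i
	}
	return nil // no pair found
}

func main() {
	fmt.Println(twoSum([]int{2, 7, 11, 15}, 9)) // [0 1]
}
```

The matching `_test.go` file would exercise the found and not-found paths to keep coverage at 100%.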

Regenerate Documentation for All Problems

  1. Run the CLI command to refresh problem metadata from LeetCode API or local cache (ctl/command.go)
  2. Invoke the render command to process all problem solutions and generate markdown documentation (ctl/render.go)
  3. Execute PDF generation to compile documentation into downloadable PDF organized by algorithm category (ctl/pdf.go)
  4. Review generated output files and commit changes to trigger GitHub Actions deployment (.github/workflows/deploy.yml)

Categorize Problems by Algorithm Type

  1. Create or edit the metadata file for your category in ctl/meta/{CategoryName}/ (e.g., ctl/meta/Hash_Table) (ctl/meta/Hash_Table)
  2. Define TagProblem objects mapping problem IDs to algorithm category in ctl/models/tagproblem.go (ctl/models/tagproblem.go)
  3. Update the category template in ctl/template/{CategoryName}.md to set generated documentation structure (ctl/template/Array.md)
  4. Re-run documentation generation to rebuild category indices and cross-references (ctl/render.go)

🔧Why these technologies

  • Go 1.19+ — Statically typed compiled language enables 100% test coverage verification and runtime performance optimization for algorithm demonstrations; Google Go style guide enforced across all solutions.
  • GitHub Actions CI/CD — Automates test execution on every commit to verify 100% coverage claim and regenerates documentation and PDF releases without manual intervention.
  • Markdown + Go templates — Enables bulk documentation generation from structured metadata; templates organize 600+ problems by algorithm category (Array, DFS, DP, etc.) for discoverability.
  • LeetCode API integration — Allows dynamic fetching of problem metadata (difficulty, acceptance rate, tags) to keep documentation synchronized with live platform without manual curation.

⚖️Trade-offs already made

  • Monolithic 600-file directory structure (leetcode/NNNN.Name/) rather than package organization

    • Why: Mirrors LeetCode problem numbering exactly, making it trivial to find any solution by problem ID; easy for users to copy individual solutions.
    • Consequence: Difficult to navigate in IDE; no Go package isolation; each problem is independently testable but requires running full test suite for coverage verification.
  • 100% test coverage requirement enforced in CI

    • Why: Guarantees every code path in solutions is exercised; provides confidence in algorithm correctness; matches LeetCode submission verification rigor.
    • Consequence: Some simple problems require artificial test cases; slows down solution writing; makes test files nearly as large as solution files.
  • Pre-generated static documentation (markdown + PDF) rather than dynamic rendering

    • Why: Enables offline reading; fast documentation website without server; searchable PDF for interview prep without network.
    • Consequence: Documentation becomes stale if problems are updated; regeneration requires re-running CLI tool; no real-time problem metadata.
  • Google Go Code Style as single strict standard

    • Why: Ensures consistency across 600+ files; trains developers in production Go conventions; makes PR reviews faster.
    • Consequence: Opinionated style may not match all team preferences; linter configuration becomes mandatory; some readable patterns are prohibited.

🚫Non-goals (don't propose these)

  • Does not provide interactive problem-solving environment or code execution sandbox (solutions are reference implementations, not a judge)
  • Does not handle user authentication or track individual solver progress (reference repository only)
  • Does not explain advanced mathematical proofs or provide real-time problem discussion forums
  • Does not guarantee solutions beat 100% runtime in all test runs (statement describes historical performance on LeetCode; can vary with server load)

🪤Traps & gotchas

  • LeetCode API dependency: ctl/request.go fetches live data from the LeetCode API (undocumented, rate-limited, subject to change); the tooling may break if LeetCode changes its response format.
  • Go module replacement: ctl/models/go.mod has a replace directive pointing to ../util, which doesn't appear in the file list; builds will fail unless the util directory exists.
  • PDF generation: ctl/pdf.go likely requires an external tool (wkhtmltopdf or similar) not listed in the dependencies.
  • Workspace setup: solutions assume a specific directory structure (inferred from the ctl/meta/ categories); adding a solution outside the expected category path will break metadata generation.

🏗️Architecture

💡Concepts to learn

  • Two-Pointer Technique — Fundamental pattern for solving array/string problems efficiently in O(n) time; heavily used in LeetCode and interview questions
  • Dynamic Programming (Memoization vs Tabulation) — Critical optimization pattern for overlapping subproblems; this repo has dedicated DP category with multiple solution patterns
  • Union-Find (Disjoint Set Union) — Efficient data structure for cycle detection and connectivity problems; often underutilized by candidates despite interview frequency
  • Binary Search Trees and Segment Trees — Advanced tree structures for range queries and interval problems; this repo has explicit categories for both with optimized implementations
  • Backtracking with Pruning — Essential for permutation/combination problems and constraint satisfaction; pruning strategies separate correct solutions from brute-force timeout
  • Bit Manipulation Tricks — Compact solutions for problems involving single bits, XOR properties, and binary representations; often required for optimal space/time
  • Sliding Window Technique — Efficient pattern for contiguous subarray/substring problems; reduces nested loops from O(n²) to O(n)
  • golang/go — Official Go repository; this project strictly follows Go coding style and conventions from this repo
  • neetcode/all — Similar algorithm problem catalog and solutions, but language-agnostic; useful for cross-language algorithm pattern comparison
  • TheAlgorithms/Go — Focused Go algorithm library without LeetCode context; complementary for standalone algorithm implementations
  • jwasham/coding-interview-university — Comprehensive interview prep curriculum; this repo serves as the Go-specific solutions companion
  • greyireland/algorithm-exercise — Multi-language LeetCode solutions with similar structure; alternative reference for same problem set in different languages
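As a concrete instance of the sliding-window pattern listed above, here is a minimal sketch (not taken from the repo): maximum sum of any contiguous subarray of length k, in O(n) instead of the naive O(n·k):

```go
package main

import "fmt"

// maxWindowSum slides a fixed-size window across nums, updating the running
// sum in O(1) per step instead of recomputing each window from scratch.
func maxWindowSum(nums []int, k int) int {
	if k <= 0 || len(nums) < k {
		return 0
	}
	sum := 0
	for _, v := range nums[:k] {
		sum += v
	}
	best := sum
	for i := k; i < len(nums); i++ {
		sum += nums[i] - nums[i-k] // add the entering element, drop the leaving one
		if sum > best {
			best = sum
		}
	}
	return best
}

func main() {
	fmt.Println(maxWindowSum([]int{1, 4, 2, 10, 2, 3, 1, 0, 20}, 4)) // 24
}
```

The same add-one/drop-one bookkeeping generalizes to variable-size windows for substring problems.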

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive test coverage for ctl/models package

The ctl/models directory contains core data structures (lcproblems.go, mdrow.go, tagproblem.go, user.go) that power the entire solution generation pipeline, but there are no visible test files (*_test.go). Given the repo emphasizes '100% test coverage', adding unit tests for these models would validate data parsing, marshaling, and business logic while maintaining the project's quality standards.

  • [ ] Create ctl/models/lcproblems_test.go with tests for LeetCode problem struct parsing and validation
  • [ ] Create ctl/models/mdrow_test.go to test markdown row generation from problem metadata
  • [ ] Create ctl/models/tagproblem_test.go to verify tag-to-problem mapping logic
  • [ ] Add test fixtures (JSON/mock data) in ctl/models/testdata/ directory
  • [ ] Ensure tests achieve >90% code coverage and integrate with CI/CD in .github/workflows/
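Table-driven tests are the idiomatic Go shape for the checklist above. A sketch of that shape follows; the function under test (tagSlug) is purely hypothetical, and the real behavior of ctl/models/tagproblem.go must be confirmed from source before writing the actual tests:

```go
package main

import "fmt"

// tagSlug is a hypothetical stand-in for a normalization helper: lowercase
// the tag name and replace spaces with hyphens. The real model's behavior
// must be read from ctl/models/tagproblem.go.
func tagSlug(tag string) string {
	out := make([]rune, 0, len(tag))
	for _, r := range tag {
		switch {
		case r >= 'A' && r <= 'Z':
			out = append(out, r+('a'-'A'))
		case r == ' ':
			out = append(out, '-')
		default:
			out = append(out, r)
		}
	}
	return string(out)
}

func main() {
	// The table-driven pattern: one slice of cases, one loop of assertions.
	cases := []struct{ in, want string }{
		{"Hash Table", "hash-table"},
		{"Dynamic Programming", "dynamic-programming"},
		{"Array", "array"},
	}
	for _, c := range cases {
		if got := tagSlug(c.in); got != c.want {
			panic(fmt.Sprintf("tagSlug(%q) = %q, want %q", c.in, got, c.want))
		}
	}
	fmt.Println("all cases pass")
}
```

In a real `_test.go` file the loop body would call `t.Errorf` instead of panicking.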

Add GitHub Actions workflow for automated problem metadata validation

The ctl/meta/ directory contains category subdirectories with problem metadata files, but there's no automated validation that these files are well-formed, complete, and correctly referenced. A workflow could validate YAML/JSON structure, check for duplicate problem IDs across categories, and ensure all problems have required fields before they're merged.

  • [ ] Create .github/workflows/validate-metadata.yml that runs on pull requests modifying ctl/meta/**
  • [ ] Add a validation script in ctl/util/ that checks metadata files for required fields (problem_id, title, difficulty, tags)
  • [ ] Implement duplicate detection across ctl/meta/* subdirectories to prevent problem ID collisions
  • [ ] Add summary report generation to comment on PRs with metadata validation results
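The duplicate-detection step could be sketched roughly as below, operating on an in-memory category-to-IDs map so it stays testable; the on-disk format of ctl/meta/ would need to be confirmed before wiring this to real files:

```go
package main

import (
	"fmt"
	"sort"
)

// findDuplicateIDs returns, in ascending order, every problem ID that
// appears in more than one category. The map input is an assumption about
// how parsed ctl/meta/ data might be represented.
func findDuplicateIDs(byCategory map[string][]int) []int {
	seen := map[int]int{}
	for _, ids := range byCategory {
		for _, id := range ids {
			seen[id]++
		}
	}
	var dups []int
	for id, n := range seen {
		if n > 1 {
			dups = append(dups, id)
		}
	}
	sort.Ints(dups)
	return dups
}

func main() {
	meta := map[string][]int{
		"Array":      {1, 15, 26},
		"Hash_Table": {1, 36},
	}
	fmt.Println(findDuplicateIDs(meta)) // [1]
}
```

A CI job would run this over the parsed metadata and fail the PR when the returned slice is non-empty.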

Create integration tests for the ctl CLI tool end-to-end workflows

The ctl/ package contains command.go, render.go, refresh.go, pdf.go, and ranking.go that orchestrate complex workflows (fetching problems, rendering markdown, generating PDFs), but there are no integration tests validating these end-to-end flows. This would catch regressions when refactoring the CLI or updating LeetCode API interactions.

  • [ ] Create ctl/integration_test.go with tests for main CLI command workflows (e.g., refresh, render, pdf generation)
  • [ ] Add mock data in ctl/testdata/ for simulated LeetCode API responses and problem metadata
  • [ ] Test the ctl/refresh.go workflow against mock API responses to ensure problem synchronization logic works
  • [ ] Test ctl/render.go template rendering by verifying generated markdown matches expected structure from ctl/template/*.md files
  • [ ] Add integration tests to .github/workflows/deploy.yml to run before publishing releases

🌿Good first issues

  • Add comprehensive godoc comments to solution functions (e.g., Array/, BackTracking/ directories): many solution files likely lack // doc comments required by Go style guide and expected for 100% test coverage badge clarity.
  • Write missing unit test edge cases for linkedlist/tree solutions in Linked_List/ and Tree/ categories: verify solutions handle nil inputs, single nodes, and empty structures (visible pattern gaps in typical problem coverage).
  • Create searchable problem index in website/static/ linking solutions to LeetCode problem IDs: the CLI generates metadata but website frontend (HTML/SCSS/CSS) appears minimal—add a filterable problem browser by category/difficulty.
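For the godoc bullet, the convention is that Go doc comments begin with the identifier they describe. A sketch with a hypothetical solution function (Kadane's algorithm; the name and file are illustrative, not taken from the repo):

```go
package main

import "fmt"

// maxSubArray returns the largest sum over all contiguous subarrays of nums
// using Kadane's algorithm: O(n) time, O(1) extra space. This is the doc
// comment style the good-first-issue above asks for.
func maxSubArray(nums []int) int {
	if len(nums) == 0 {
		return 0
	}
	best, cur := nums[0], nums[0]
	for _, v := range nums[1:] {
		if cur < 0 {
			cur = v // a negative prefix never helps; restart the window
		} else {
			cur += v
		}
		if cur > best {
			best = cur
		}
	}
	return best
}

func main() {
	fmt.Println(maxSubArray([]int{-2, 1, -3, 4, -1, 2, 1, -5, 4})) // 6
}
```

`gofmt` and `go vet` won't enforce these comments, so a linter such as revive's exported-comment rule would be needed to check them in CI.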

Top contributors


📝Recent commits

  • d78a9e0 — Merge pull request #281 from danyaobertan/patch-1 (halfrost)
  • a190dbe — Merge pull request #295 from dark-Qy/master (halfrost)
  • 0ff840f — 修改494 dp解法,添加条件判断 (dark-Qy)
  • 93adea0 — Merge pull request #294 from halfrost/halfrost-patch-1 (halfrost)
  • bc9ec08 — Update deploy.yml (halfrost)
  • b75de55 — Merge pull request #292 from HCH1212/master (halfrost)
  • 281ae8b — add no.134 answer (HCH1212)
  • 25c03cf — Merge pull request #286 from stainedcreek/patch-1 (halfrost)
  • b878af0 — Update Time_Complexity.md (stainedcreek)
  • c5bcb82 — optimized and simplified (danyaobertan)

🔒Security observations

This is a LeetCode solutions repository with moderate security posture. Primary concerns are outdated Go version (1.19) and potentially outdated dependencies lacking recent security patches. The codebase appears to be primarily solution code without significant external data handling, reducing injection attack surface. However, the CLI tools (ctl/) should be audited for input validation. No critical secrets or hardcoded credentials are evident from the file structure. Recommendations include updating Go to 1.21+, performing dependency vulnerability scanning, and establishing a security policy for responsible disclosure.

  • Medium · Outdated Go Version in go.mod — ctl/models/go.mod and potentially root go.mod. The go.mod file specifies 'go 1.19' which is outdated. Go 1.19 was released in August 2022 and is no longer receiving security updates. Current stable versions are 1.21+ with security patches. Fix: Update to Go 1.21 or later to ensure security patches and bug fixes are applied. Update the 'go' directive in go.mod files.
  • Medium · Potential Outdated Dependencies — ctl/models/go.sum. The go.sum file references a dependency with timestamp 'e3bb5aff34d0' from September 2022. Without access to the full go.sum content and current vulnerability databases, there may be unpatched vulnerabilities in transitive dependencies. Fix: Run 'go list -json -m all | nancy sleuth' or 'go list -m all' with Snyk/Dependabot to check for known vulnerabilities. Regularly update dependencies using 'go get -u'.
  • Low · No Security Policy Defined — Repository root. No SECURITY.md file is present in the repository. This makes it unclear how security vulnerabilities should be reported responsibly. Fix: Create a SECURITY.md file following GitHub's recommended format for responsible disclosure, including reporting procedures and security contact information.
  • Low · Missing Input Validation in CLI Tool — ctl/command.go, ctl/request.go. The 'ctl' directory contains CLI tools (command.go, request.go) that process external input. Without reviewing the actual implementation, there's potential risk for injection attacks if user input is not properly validated. Fix: Implement strict input validation and sanitization for all user-supplied data. Use parameterized queries/safe APIs. Avoid string concatenation for constructing commands or queries.
  • Low · No Code Signing or Verification — .github/workflows/deploy.yml. The repository releases PDFs (mentioned in GitHub Actions workflow) without apparent code signing or integrity verification mechanisms visible. Fix: Implement code signing for releases using GPG or similar. Provide SHA checksums or digital signatures for downloadable artifacts to allow users to verify integrity.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
