# Home
NeetCode is a high‑performance Python framework for LeetCode‑style algorithms, built for developers who care about correctness, reproducibility, and real performance, not just collecting solutions.
This page describes the architecture, design principles, and usage patterns behind NeetCode, so you can understand how the system works and how to extend it safely.
- What is NeetCode?
- Who is NeetCode for?
- Core Design Principles
- System Architecture Overview
- Execution Flow
- How to Use NeetCode Day‑to‑Day
- How to Add New Problems & Solutions
- Testing, Quality, and the .dev Suite
- Internationalization (EN / Traditional Chinese)
- FAQ – Architecture and Usage
- Roadmap and Contribution Guidelines
## What is NeetCode?
NeetCode is a structured execution environment for algorithms:
- It is not just a repository of answers.
- It is a full test runner, benchmark engine, and complexity estimator for LeetCode‑style problems.
At its core, NeetCode provides:
- Reproducible random tests with explicit seeds.
- Custom judge functions for flexible correctness checking.
- Multi‑solution benchmarking for comparing algorithmic approaches.
- VS Code + CLI integration for a smooth development and debugging workflow.
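For example, a custom judge can accept any valid answer instead of one canonical output. A minimal sketch, assuming a judge that receives the raw input and the solver's output as strings (the exact `JUDGE_FUNC` signature is defined by the runner, so treat the parameters here as illustrative):

```python
# Hypothetical judge for a two-sum-style problem: accept ANY index pair
# whose values sum to the target, not just one canonical pair.
def JUDGE_FUNC(input_text: str, actual_output: str) -> bool:
    tokens = input_text.split()
    target, nums = int(tokens[0]), [int(t) for t in tokens[1:]]
    try:
        i, j = (int(t) for t in actual_output.split())
    except ValueError:
        return False  # malformed output fails the judge
    in_range = 0 <= i < len(nums) and 0 <= j < len(nums)
    return in_range and i != j and nums[i] + nums[j] == target
```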
## Who is NeetCode for?
- Competitive programmers who want to stress‑test and benchmark solutions before contests.
- Software engineers preparing interviews, who want realistic input/output and fast feedback loops.
- Students and educators who need a repeatable environment for teaching algorithms.
- Algorithm / tooling engineers who care about architecture, maintainability, and testability.
## Core Design Principles
- **Separation of concerns**
  - Solutions: implement algorithm logic and input parsing.
  - Runner: orchestrates execution, comparison, reporting, and complexity analysis.
  - Generators: own data generation and edge‑case coverage.
- **Reproducibility first**
  - Every important non‑deterministic behavior (e.g., random tests) is driven by explicit seeds and clear logging.
- **Stable public contracts**
  - `solve()`, `SOLUTIONS`, generator APIs, and `JUDGE_FUNC` form the public API surface for users and maintainers.
- **Characterization before refactoring**
  - The `.dev/tests` suite documents current behavior; every refactor must keep these guarantees unless intentionally changed.
- **Tooling‑centric developer experience**
  - VS Code tasks, launch configurations, and scripts are treated as first‑class UX, not afterthoughts.
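As a concrete illustration of the reproducibility principle (a plain stdlib sketch, not framework code): when the seed is explicit and logged, any failing generated case can be re‑run exactly.

```python
import random

def make_case(seed: int) -> list[int]:
    rng = random.Random(seed)  # explicit seed, logged alongside the case
    return [rng.randint(0, 99) for _ in range(5)]

# The same seed always reproduces the same case:
assert make_case(1234) == make_case(1234)
print(f"seed=1234 -> {make_case(1234)}")
```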
## System Architecture Overview
- `solutions/`
  - Responsibility: implement problem‑specific algorithms.
  - Key elements:
    - A `Solution` class with LeetCode‑style methods.
    - A `solve()` function that reads from stdin and prints to stdout.
    - An optional `SOLUTIONS` dictionary describing multiple methods for benchmarking.
- `tests/`
  - Canonical `.in`/`.out` files defining baseline correctness.
  - Stable, human‑readable cases for debugging and review.
- `generators/`
  - Optional modules that generate large or randomized inputs.
  - Can also expose `generate_for_complexity(n)` for time‑complexity estimation.
- Runner modules:
  - `test_runner.py` – command‑line entry point and orchestration.
  - `executor.py` – runs single or multiple test cases and collects results.
  - `compare.py` – normalizes and compares outputs (exact, `sorted`, `set`, `JUDGE_FUNC`).
  - `reporter.py` – renders human‑friendly test and benchmark reports.
  - `module_loader.py` – safely loads solution and generator modules.
  - `complexity_estimator.py` – estimates empirical time complexity via sampled runs.
  - `case_runner.py` – focused single‑case runner for debugging.
- `.vscode/` – curated tasks and launch configurations for:
  - Running tests for the current problem.
  - Debugging specific cases.
  - Benchmarking solutions with a single key press.
- `.dev/`
  - Home of the maintainer test suite (150+ cases).
  - Documentation for test strategy and architecture.
  - The place where we protect the public behavior of the framework.
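Putting the `solutions/` contract together, a module might look like the following. This is a hypothetical sketch (the file name, input format, and the exact value shape of `SOLUTIONS` are assumptions); the authoritative contract is the one described above: a `Solution` class, a `solve()` that reads stdin and writes stdout, and an optional `SOLUTIONS` mapping.

```python
# solutions/1_two_sum.py - illustrative sketch, not a file from the repo.
import sys

class Solution:
    def twoSum(self, nums: list[int], target: int) -> list[int]:
        seen: dict[int, int] = {}
        for i, x in enumerate(nums):
            if target - x in seen:
                return [seen[target - x], i]
            seen[x] = i
        return []

def solve() -> None:
    # Input format (assumed): first token is the target, the rest are nums.
    tokens = sys.stdin.read().split()
    target, nums = int(tokens[0]), [int(t) for t in tokens[1:]]
    print(*Solution().twoSum(nums, target))

# Optional: named entry points for multi-solution benchmarking.
SOLUTIONS = {"default": solve}

if __name__ == "__main__":
    solve()
```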
## Execution Flow
From an architect’s point of view, a NeetCode run looks like this:
1. **Discovery**
   - The runner resolves the problem name to:
     - A `solutions/*.py` module.
     - Optionally, a `generators/*.py` module.
2. **Configuration**
   - CLI flags or VS Code tasks are parsed into an internal configuration:
     - Which method(s) to run (`SOLUTIONS` key).
     - How many generated cases to produce.
     - Whether to benchmark or estimate complexity.
3. **Input Resolution**
   - Static `.in` files are loaded from `tests/`.
   - Generated inputs (if enabled) are appended to, or replace, the static cases.
4. **Execution**
   - The framework invokes `solve()` (or wrapper functions) under controlled conditions.
   - Runtime, output, and metadata are collected for each case.
5. **Validation**
   - Outputs are normalized (lists, sets, strings, floats, etc.).
   - Comparison is applied in priority order:
     1. `JUDGE_FUNC` (custom logic)
     2. `COMPARE_MODE` (`sorted`, `set`)
     3. Exact match
6. **Reporting**
   - Human‑readable results, failure diffs, timing tables, and optional complexity curves are printed.
Each step is deliberately isolated, making the system observable, testable, and refactorable.
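The validation priority can be pictured as a simple dispatch. The sketch below is illustrative only (the real `compare.py` internals and signatures may differ); it just encodes the order: custom judge first, then `COMPARE_MODE`, then exact match.

```python
def check(expected: str, actual: str, judge_func=None, compare_mode=None) -> bool:
    # 1. Custom judge logic wins if provided (signature assumed for illustration).
    if judge_func is not None:
        return judge_func(expected, actual)
    # 2. Order-insensitive comparison modes.
    if compare_mode == "sorted":
        return sorted(actual.split()) == sorted(expected.split())
    if compare_mode == "set":
        return set(actual.split()) == set(expected.split())
    # 3. Fallback: exact match on normalized text.
    return actual.strip() == expected.strip()
```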
## How to Use NeetCode Day‑to‑Day
1. **Solve a problem**
   - Implement your solution in `solutions/{id}_{name}.py`.
2. **Add or update tests**
   - Add `.in`/`.out` pairs under `tests/`.
3. **Run tests**
   - Use the platform scripts (`run_tests.bat` / `run_tests.sh`) or VS Code tasks.
4. **Debug a specific case**
   - Use the single‑case runner (`run_case` scripts) or a dedicated F5 configuration.
5. **Benchmark multiple solutions**
   - Define `SOLUTIONS` and run the runner with `--all` and `--benchmark`.
From a contributor’s perspective, this workflow is designed to be predictable, repeatable, and fast.
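Conceptually, benchmarking with `--all` boils down to timing every named entry in `SOLUTIONS` on the same inputs. A stdlib‑only sketch of that idea (not the runner's actual code):

```python
import time

def bench(solutions: dict, inputs: list) -> None:
    # Time each named approach on identical inputs for a fair comparison.
    for name, fn in solutions.items():
        start = time.perf_counter()
        for case in inputs:
            fn(case)
        ms = (time.perf_counter() - start) * 1000
        print(f"{name:>12}: {ms:.2f} ms")

def sum_builtin(xs: list[int]) -> int:
    return sum(xs)

def sum_loop(xs: list[int]) -> int:
    total = 0
    for x in xs:
        total += x
    return total

bench({"builtin": sum_builtin, "loop": sum_loop}, [list(range(10_000))] * 100)
```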
## How to Add New Problems & Solutions
1. **Generate a template**
   - Use the `new_problem` scripts to scaffold a solution file and base tests.
2. **Implement the algorithm**
   - Fill in the `Solution` method and the `solve()` input parsing.
3. **Add tests**
   - Create `.in`/`.out` files that match your input/output format.

For multiple approaches to the same problem:
- Use `SOLUTIONS` to describe multiple named approaches:
  - Example keys: `default`, `heap`, `divide`, `greedy`.
- When you need multiple classes or signatures, introduce thin wrapper functions (e.g., `solve_recursive`, `solve_iterative`) and map them in `SOLUTIONS`, as sketched below.
- This pattern keeps:
  - LeetCode‑style method names in classes.
  - Stable, simple entry points for the runner.
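A sketch of the wrapper pattern, assuming `SOLUTIONS` maps names to zero‑argument entry points that follow the `solve()` stdin/stdout contract (the problem and method names here are hypothetical):

```python
import sys

class Solution:
    def climbStairs(self, n: int) -> int:
        # Iterative DP with O(1) space.
        a, b = 1, 1
        for _ in range(n - 1):
            a, b = b, a + b
        return b

    def climbStairsRecursive(self, n: int, memo: dict | None = None) -> int:
        # Memoized recursion; returns the same answers as the iterative method.
        if memo is None:
            memo = {}
        if n <= 1:
            return 1
        if n not in memo:
            memo[n] = (self.climbStairsRecursive(n - 1, memo)
                       + self.climbStairsRecursive(n - 2, memo))
        return memo[n]

def solve_iterative() -> None:
    print(Solution().climbStairs(int(sys.stdin.read())))

def solve_recursive() -> None:
    print(Solution().climbStairsRecursive(int(sys.stdin.read())))

SOLUTIONS = {
    "default": solve_iterative,
    "recursive": solve_recursive,
}
```

The class keeps LeetCode‑style method names, while the runner only ever sees the stable `solve_*` entry points.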
## Testing, Quality, and the .dev Suite
The `.dev` directory is the quality gate for the project.
- Unit tests lock in behavior of each runner module.
- Edge‑case suites defend against regressions around tricky inputs.
- Integration tests simulate real user workflows through the CLI.
As an architectural rule:
> If a behavior matters to users, it must be captured in `.dev/tests`.
Any structural change to the runner or IO flow should be paired with corresponding updates in the .dev test suite.
## Internationalization (EN / Traditional Chinese)
NeetCode is documented in both:
- **English** (`README.md`) – the primary reference for the global audience and search engines.
- **Traditional Chinese** (`README_zh-TW.md`) – a faithful translation with locally adapted explanations.
For SEO / GEO:
- English remains the canonical source of truth.
- The Traditional Chinese version ensures the project is discoverable and approachable for Chinese‑speaking developers without diverging architecturally.
## FAQ – Architecture and Usage
NeetCode turns ad‑hoc scripts into a repeatable, measurable experiment platform.
It standardizes how you run, validate, compare, and stress‑test algorithmic solutions.
Unstructured scripts make it hard to:
- Compare multiple implementations fairly.
- Re‑run the exact same random tests.
- Share a consistent setup across a team or a class.
NeetCode encodes these concerns into a clear, documented architecture.
The following contracts are treated as stable surface area:
- The `solve()` entry point.
- The `SOLUTIONS` structure.
- `JUDGE_FUNC` and `COMPARE_MODE`.
- Generator APIs (`generate`, and optionally `generate_for_complexity`).

Internal modules may evolve, but they are guarded by `.dev/tests` to preserve observable behavior.
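For illustration, a generator module honoring these contracts might look like this. The exact `generate` signature is an assumption (here: seed in, input text out); only `generate_for_complexity(n)` is named explicitly above.

```python
# generators/1_two_sum.py - hypothetical sketch.
import random

def generate(seed: int) -> str:
    # Explicit seed keeps random cases reproducible (reproducibility first).
    rng = random.Random(seed)
    n = rng.randint(2, 1_000)
    nums = [rng.randint(-10_000, 10_000) for _ in range(n)]
    i, j = rng.sample(range(n), 2)
    target = nums[i] + nums[j]  # guarantee at least one valid answer
    return f"{target} " + " ".join(map(str, nums)) + "\n"

def generate_for_complexity(n: int) -> str:
    # Deterministic input of size n for empirical complexity sampling.
    rng = random.Random(n)
    nums = [rng.randint(-10_000, 10_000) for _ in range(max(n, 2))]
    return f"{nums[0] + nums[1]} " + " ".join(map(str, nums)) + "\n"
```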
## Roadmap and Contribution Guidelines
- **Short‑term**
  - Refine diagnostics for failed cases and benchmark summaries.
  - Document more solution and generator patterns in the wiki.
- **Mid‑term**
  - Add pluggable reporters (JSON, HTML, CI‑friendly formats).
  - Expand complexity estimation and performance profiling features.
- **Long‑term**
  - Position NeetCode as a reusable algorithm lab runtime for courses, teams, and tooling ecosystems.

If you plan to contribute:
- Start by reading `README.md` and `.dev/README.md`.
- Follow the existing patterns for solutions, generators, and tests.
- Maintain or extend the `.dev/tests` suite when you change behavior.