
NeetCode Architecture & Guide – High‑Performance Python LeetCode Framework

NeetCode is a high‑performance Python framework for LeetCode‑style algorithms, built for developers who care about correctness, reproducibility, and real performance, not just collecting solutions.

This page describes the architecture, design principles, and usage patterns behind NeetCode, so you can understand how the system works and how to extend it safely.


Table of Contents

  • What is NeetCode?
  • Who is NeetCode for?
  • Core Design Principles
  • System Architecture Overview
  • Execution Flow
  • How to Use NeetCode Day‑to‑Day
  • How to Add New Problems & Solutions
  • Testing, Quality, and the .dev Suite
  • Internationalization (EN / Traditional Chinese)
  • FAQ – Architecture and Usage
  • Roadmap and Contribution Guidelines

What is NeetCode?

NeetCode is a structured execution environment for algorithms:

  • Not just a repository of answers.
  • Rather, a full test runner, benchmark engine, and complexity estimator for LeetCode‑style problems.

At its core, NeetCode provides:

  • Reproducible random tests with explicit seeds.
  • Custom judge functions for flexible correctness checking.
  • Multi‑solution benchmarking for comparing algorithmic approaches.
  • VS Code + CLI integration for a smooth development and debugging workflow.

Who is NeetCode for?

  • Competitive programmers who want to stress‑test and benchmark solutions before contests.
  • Software engineers preparing for interviews who want realistic input/output and fast feedback loops.
  • Students and educators who need a repeatable environment for teaching algorithms.
  • Algorithm / tooling engineers who care about architecture, maintainability, and testability.

Core Design Principles

  • Separation of concerns
    • Solutions: implement algorithm logic and input parsing.
    • Runner: orchestrates execution, comparison, reporting, and complexity analysis.
    • Generators: own data generation and edge‑case coverage.
  • Reproducibility first
    • Every important non‑deterministic behavior (e.g., random tests) is driven by explicit seeds and clear logging.
  • Stable public contracts
    • solve(), SOLUTIONS, generator APIs, and JUDGE_FUNC form the public API surface for users and maintainers.
  • Characterization before refactoring
    • The .dev/tests suite documents current behavior; every refactor must keep these guarantees unless intentionally changed.
  • Tooling‑centric developer experience
    • VS Code tasks, launch configurations, and scripts are treated as first‑class UX, not afterthoughts.

System Architecture Overview

Solutions Layer (solutions/)

  • Responsibility: implement problem‑specific algorithms.
  • Key elements:
    • Solution class with LeetCode‑style methods.
    • solve() function that reads from stdin and prints to stdout.
    • Optional SOLUTIONS dictionary describing multiple methods for benchmarking.
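
To make the contract concrete, here is a minimal sketch of a solution module. The two‑sum problem, its stdin format, and the exact SOLUTIONS schema are illustrative assumptions; Solution, solve(), and SOLUTIONS themselves are the contracts listed above.

    import sys

    class Solution:
        # LeetCode-style method; two-sum is only an illustrative choice of problem.
        def twoSum(self, nums, target):
            seen = {}
            for i, x in enumerate(nums):
                if target - x in seen:
                    return [seen[target - x], i]
                seen[x] = i
            return []

    def solve():
        # Parse one case from stdin; this flat "nums... target" format is an assumption.
        data = sys.stdin.read().split()
        nums, target = list(map(int, data[:-1])), int(data[-1])
        print(*Solution().twoSum(nums, target))

    # Optional registry for multi-solution benchmarking (exact schema is an assumption).
    SOLUTIONS = {"default": solve}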

Test & Generator Layer (tests/, generators/)

  • tests/
    • Canonical .in / .out files defining baseline correctness.
    • Stable, human‑readable cases for debugging and review.
  • generators/
    • Optional modules that generate large or randomized inputs.
    • Can also expose generate_for_complexity(n) for time‑complexity estimation.
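
A hedged sketch of what such a generator module can look like. The generate and generate_for_complexity names follow the generator APIs described in this guide; the seed parameter and the emitted input format are assumptions that mirror the solution sketch above.

    import random

    def generate(seed=0):
        # An explicit seed makes every random case reproducible on re-runs.
        rng = random.Random(seed)
        n = rng.randint(2, 1000)
        nums = [rng.randint(-10**9, 10**9) for _ in range(n)]
        target = rng.choice(nums) + rng.choice(nums)
        # Return the raw stdin text for one case (format is an assumption).
        return " ".join(map(str, nums)) + f" {target}\n"

    def generate_for_complexity(n):
        # Produce an input of size exactly n so the runner can sample runtimes.
        rng = random.Random(n)  # deterministic per size
        nums = [rng.randint(-10**9, 10**9) for _ in range(n)]
        return " ".join(map(str, nums)) + f" {nums[0] + nums[-1]}\n"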

Runner Layer (runner/)

  • test_runner.py – Command‑line entry point and orchestration.
  • executor.py – Runs single or multiple test cases and collects results.
  • compare.py – Normalizes and compares outputs (exact, sorted, set, JUDGE_FUNC).
  • reporter.py – Renders human‑friendly test and benchmark reports.
  • module_loader.py – Safely loads solution and generator modules.
  • complexity_estimator.py – Estimates empirical time complexity via sampled runs (idea sketched below).
  • case_runner.py – Focused single‑case runner for debugging.
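
To illustrate the idea behind complexity_estimator.py (not its actual implementation), empirical complexity can be estimated by timing a callable at growing input sizes and fitting the slope on a log‑log scale:

    import math
    import time

    def estimate_exponent(run, sizes=(1_000, 2_000, 4_000, 8_000)):
        # Fit runtime ~ c * n**k by least squares on (log n, log t);
        # k near 1 suggests O(n), k near 2 suggests O(n^2), and so on.
        xs, ys = [], []
        for n in sizes:
            start = time.perf_counter()
            run(n)
            xs.append(math.log(n))
            ys.append(math.log(time.perf_counter() - start))
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

For example, estimate_exponent(lambda n: [i * i for i in range(n)]) should report an exponent near 1, since a list comprehension scales roughly linearly.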

Tooling and Maintenance

  • .vscode/ – Curated tasks and launch configurations for:
    • Running tests for the current problem.
    • Debugging specific cases.
    • Benchmarking solutions with a single key press.
  • .dev/
    • Home of the maintainer test suite (150+ cases).
    • Documentation for test strategy and architecture.
    • The place where we protect the public behavior of the framework.

Execution Flow

From an architect’s point of view, a NeetCode run looks like this:

  1. Discovery
    • The runner resolves the problem name to:
      • A solutions/*.py module.
      • Optionally, a generators/*.py module.
  2. Configuration
    • CLI flags or VS Code tasks are parsed into an internal configuration:
      • Which method(s) to run (SOLUTIONS key).
      • How many generated cases.
      • Whether to benchmark or estimate complexity.
  3. Input Resolution
    • Static .in files are loaded from tests/.
    • Generated inputs (if enabled) are appended to, or replace, the static cases.
  4. Execution
    • The framework invokes solve() (or wrapper functions) under controlled conditions.
    • Runtime, output, and metadata are collected for each case.
  5. Validation
    • Outputs are normalized (lists, sets, strings, floats, etc.).
    • Comparison is applied in priority order:
      1. JUDGE_FUNC (custom logic)
      2. COMPARE_MODE (sorted, set)
      3. Exact match
  6. Reporting
    • Human‑readable results, failure diffs, timing tables, and optional complexity curves are printed.

Each step is deliberately isolated, making the system observable, testable, and refactorable.
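
As a simplified sketch of the validation priority above (the real compare.py will differ in its details; the outputs_match name and its keyword arguments are assumptions):

    def outputs_match(expected, actual, judge_func=None, compare_mode=None):
        # 1. A custom judge wins whenever the module defines JUDGE_FUNC.
        if judge_func is not None:
            return judge_func(expected, actual)
        exp, act = expected.split(), actual.split()
        # 2. COMPARE_MODE relaxes ordering or multiplicity.
        if compare_mode == "sorted":
            return sorted(exp) == sorted(act)
        if compare_mode == "set":
            return set(exp) == set(act)
        # 3. Fall back to exact (whitespace-normalized) match.
        return exp == act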


How to Use NeetCode Day‑to‑Day

  • Solve a problem
    • Implement your solution in solutions/{id}_{name}.py.
  • Add or update tests
    • Add .in / .out pairs under tests/.
  • Run tests
    • Use the platform scripts (run_tests.bat / run_tests.sh) or VS Code tasks.
  • Debug a specific case
    • Use the single‑case runner (run_case scripts) or a dedicated F5 configuration.
  • Benchmark multiple solutions
    • Define SOLUTIONS and run the runner with --all and --benchmark.

From a contributor’s perspective, this workflow is designed to be predictable, repeatable, and fast.


How to Add New Problems & Solutions

Single‑solution pattern

  1. Generate a template
    • Use new_problem scripts to scaffold a solution file and base tests.
  2. Implement the algorithm
    • Fill in the Solution method and solve() input parsing.
  3. Add tests
    • Create .in / .out files that match your input/output format.
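
As a purely illustrative example, a two‑sum style problem whose solve() reads the numbers followed by the target could use a pair like this (file names and format are assumptions):

    tests/0001_two_sum_1.in:
    2 7 11 15 9

    tests/0001_two_sum_1.out:
    0 1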

Multi‑solution and wrapper patterns

  • Use SOLUTIONS to describe multiple named approaches:
    • Example keys: default, heap, divide, greedy.
  • When you need multiple classes or signatures:
    • Introduce thin wrapper functions (e.g., solve_recursive, solve_iterative) and map them in SOLUTIONS, as sketched below.
  • This pattern keeps:
    • LeetCode‑style method names in classes.
    • Stable, simple entry points for the runner.
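
A sketch of the wrapper pattern, reusing the solve_recursive / solve_iterative names from above. The house‑robber problem and the exact shape of the SOLUTIONS values are illustrative assumptions:

    import sys
    from functools import lru_cache

    class Solution:
        def rob(self, nums):  # illustrative LeetCode-style method
            prev = curr = 0
            for x in nums:
                prev, curr = curr, max(curr, prev + x)
            return curr

    def _read_nums():
        return list(map(int, sys.stdin.read().split()))

    def solve_iterative():
        print(Solution().rob(_read_nums()))

    def solve_recursive():
        # A second, differently shaped implementation behind the same entry point.
        nums = _read_nums()

        @lru_cache(maxsize=None)
        def best(i):
            return 0 if i < 0 else max(best(i - 1), best(i - 2) + nums[i])

        print(best(len(nums) - 1))

    SOLUTIONS = {
        "default": solve_iterative,
        "recursive": solve_recursive,
    }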

Testing, Quality, and the .dev Suite

The .dev directory is the quality gate for the project.

  • Unit tests lock in behavior of each runner module.
  • Edge‑case suites defend against regressions around tricky inputs.
  • Integration tests simulate real user workflows through the CLI.

As an architectural rule:

If a behavior matters to users, it must be captured in .dev/tests.

Any structural change to the runner or IO flow should be paired with corresponding updates in the .dev test suite.
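
For flavor, a characterization test in .dev/tests might look like the following. The import path and the outputs_match name are assumptions standing in for whatever compare.py actually exposes:

    # Hypothetical characterization test; pins down current comparison behavior.
    from runner.compare import outputs_match  # assumed module path and name

    def test_sorted_mode_ignores_order():
        # Locks in current behavior: "sorted" mode treats permutations as equal.
        assert outputs_match("1 2 3", "3 1 2", compare_mode="sorted")

    def test_exact_match_is_strict_by_default():
        assert not outputs_match("1 2 3", "3 1 2")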


Internationalization (EN / Traditional Chinese)

NeetCode is documented in both:

  • English (README.md) – primary reference for global audience and search engines.
  • Traditional Chinese (README_zh-TW.md) – faithful translation with locally adapted explanations.

For SEO / GEO:

  • English remains the canonical source of truth.
  • The Traditional Chinese version ensures the project is discoverable and approachable for Chinese‑speaking developers without diverging architecturally.

FAQ – Architecture and Usage

What problem does this framework solve beyond “just solving LeetCode”?

NeetCode turns ad‑hoc scripts into a repeatable, measurable experiment platform.
It standardizes how you run, validate, compare, and stress‑test algorithmic solutions.

Why not just use raw Python scripts?

Unstructured scripts make it hard to:

  • Compare multiple implementations fairly.
  • Re‑run the exact same random tests.
  • Share a consistent setup across a team or a class.

NeetCode encodes these concerns into a clear, documented architecture.

How stable are the public APIs?

The following contracts are treated as stable surface area:

  • solve() entry point.
  • SOLUTIONS structure.
  • JUDGE_FUNC and COMPARE_MODE.
  • Generator APIs (generate, optionally generate_for_complexity).

Internal modules may evolve, but they are guarded by .dev/tests to preserve observable behavior.


Roadmap and Contribution Guidelines

  • Short‑term
    • Refine diagnostics for failed cases and benchmark summaries.
    • Document more solution and generator patterns in the wiki.
  • Mid‑term
    • Add pluggable reporters (JSON, HTML, CI‑friendly formats).
    • Expand complexity estimation and performance profiling features.
  • Long‑term
    • Position NeetCode as a reusable algorithm lab runtime for courses, teams, and tooling ecosystems.

If you plan to contribute:

  • Start by reading README.md and .dev/README.md.
  • Follow the existing patterns for solutions, generators, and tests.
  • Maintain or extend the .dev/tests suite when you change behavior.