Commit 16c6e10

feat: update new cookiecutter template (#33)

1 parent 57b6cd7

383 files changed: +8109 / -6792 lines


.amazonq/rules/problem-creation.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -135,6 +135,7 @@ When creating JSON properties that use PascalCase (solution_class_name, test_cla
 - Multiple methods including `__init__`
 - Complex test setup with operation sequences
 - Import custom class in test_imports
+- **NEVER include custom solution classes** in test_imports - only import the main solution class specified in solution_class_name
 
 ### Dict-based Tree Problems (Trie, etc.)
 
```
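For illustration only (not part of the committed file), the new rule in a short, hedged Python sketch. The import strings are borrowed from the two_sum defaults in `.templates/leetcode/cookiecutter.json` later in this commit; `CustomStack` is a hypothetical custom solution class used only to show the violation.

```python
# OK: test_imports references only the main class named by solution_class_name.
good_test_imports = (
    "import pytest\n"
    "from leetcode_py.test_utils import logged_test\n"
    "from .solution import Solution"
)

# Not OK: a custom solution class leaks into test_imports.
bad_test_imports = good_test_imports + "\nfrom .solution import CustomStack"  # hypothetical class
```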

Lines changed: 129 additions & 0 deletions

@@ -0,0 +1,129 @@

````markdown
# Test Case Enhancement Rules

## Assistant Workflow for Adding Comprehensive Test Cases

When user requests to enhance test cases for a problem, the assistant will:

### 1. Problem Resolution (Priority Order)

- **FIRST**: Try to resolve from context - check active file path or user-provided problem name
- **SECOND**: If context resolution fails, THEN run `poetry run python .templates/check_test_cases.py --threshold=10 --max=1` to auto-detect 1 problem with <10 test cases
- **LAST**: If both above fail, ask user to explicitly specify problem name

### 2. Test Case Generation

- Read `leetcode/{problem_name}/README.md` for problem understanding
- Analyze existing test cases in `leetcode/{problem_name}/tests.py`
- Generate comprehensive test cases covering:
  - **Edge cases**: Empty inputs, single elements, boundary values
  - **Corner cases**: Maximum/minimum constraints, special patterns
  - **Normal cases**: Typical scenarios with varied complexity
  - **Error cases**: Invalid inputs (if applicable)

### 3. Initial Validation

- Run `make p-test PROBLEM={problem_name}` to verify current implementation
- **If errors found**:
  - DO NOT update implementation automatically
  - Only update test cases if they're incorrect
  - If implementation seems wrong, ASK USER first before modifying

### 4. JSON Template Update

- Update corresponding `.templates/leetcode/json/{problem_name}.json`
- Add new test cases to `test_cases` field in proper format
- Maintain existing test structure and naming conventions

### 5. Backup and Regeneration Process

- **Backup**: Move `leetcode/{problem_name}/` to `.cache/leetcode/{problem_name}/`
- **Regenerate**: Run `make p-gen PROBLEM={problem_name} FORCE=1`
- **Lint check**: Run `make p-lint PROBLEM={problem_name}`
- **Iterate**: If lint fails, update JSON and regenerate until passes

### 6. Solution Preservation

- Copy `solution.py` from backup to newly generated structure
- Run `make p-test PROBLEM={problem_name}` to verify tests pass
- **If tests fail**: Go back to step 4, update JSON, and iterate until passes

### 7. Cleanup and Restore

- **CRITICAL**: Remove entire newly generated `leetcode/{problem_name}/` directory
- **CRITICAL**: Restore original structure from `.cache/leetcode/{problem_name}/` backup
- **CRITICAL**: Only THEN copy enhanced `test_solution.py` from generated files to restored structure
- **CRITICAL**: Preserve existing solution class parametrization - if original test had multiple solution classes, restore them
- Verify final state with `make p-test PROBLEM={problem_name}`
- Clean up backup directory after successful verification

## Test Case Quality Standards

### Coverage Requirements

- **Minimum 10 test cases** per problem
- **Edge cases**: 20-30% of total test cases
- **Normal cases**: 50-60% of total test cases
- **Corner cases**: 20-30% of total test cases

### Test Case Categories

#### Edge Cases

- Empty inputs: `[]`, `""`, `None`
- Single element: `[1]`, `"a"`
- Boundary values: `[0]`, `[1]`, `[-1]`
- Maximum/minimum constraints from problem description

#### Corner Cases

- Duplicate elements: `[1,1,1]`
- Sorted/reverse sorted arrays: `[1,2,3]`, `[3,2,1]`
- All same elements: `[5,5,5,5]`
- Alternating patterns: `[1,0,1,0]`

#### Normal Cases

- Mixed positive/negative numbers
- Various array sizes within constraints
- Different data patterns and structures
- Representative problem scenarios

### JSON Format Requirements

- Use single quotes for Python strings in test cases
- Follow existing parametrize format
- Maintain type hints in parametrize_typed
- Ensure test_cases string is valid Python list syntax
- **NEVER include custom solution classes** in test_imports - only import the main solution class specified in solution_class_name
- **PRESERVE existing solution class parametrization** - if original test had multiple solution classes, restore them after JSON regeneration

## Commands Reference

```bash
# Find problems needing more test cases
poetry run python .templates/check_test_cases.py --threshold=10 --max=1

# Test specific problem
make p-test PROBLEM={problem_name}

# Generate from JSON template
make p-gen PROBLEM={problem_name} FORCE=1

# Lint specific problem
make p-lint PROBLEM={problem_name}
```

## Error Handling

- **Implementation errors**: Ask user before modifying solution code
- **Test failures**: Update JSON template and regenerate
- **Lint failures**: Fix JSON format and iterate
- **Backup failures**: Ensure `.cache/leetcode/` directory exists

## Success Criteria

- All tests pass with enhanced test cases
- Minimum 10 comprehensive test cases per problem
- Original solution code preserved and working
- JSON template updated for future regeneration
- Clean final state with no temporary files
````
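
As a hedged illustration of the JSON Format Requirements above (not part of the committed file): the `test_cases` string is expected to be valid Python list syntax whose tuples line up with the comma-separated names in the `parametrize` field. The example values are taken from the two_sum defaults in `.templates/leetcode/cookiecutter.json` later in this commit.

```python
import ast

# Values from the two_sum template defaults in .templates/leetcode/cookiecutter.json.
test_cases = "[([2, 7, 11, 15], 9, [0, 1]), ([3, 2, 4], 6, [1, 2])]"
parametrize = "nums, target, expected"

# The string must parse as a Python list of tuples...
cases = ast.literal_eval(test_cases)
assert isinstance(cases, list) and len(cases) == 2

# ...and each tuple's arity must match the parametrize names.
assert all(len(case) == len(parametrize.split(", ")) for case in cases)
```
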
Lines changed: 48 additions & 0 deletions

@@ -0,0 +1,48 @@

```yaml
name: ci

on:
  push:
    branches: [main]
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  test-reproducibility:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0

      - name: Set up Python
        uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
        with:
          python-version: "3.13"

      - name: Install Poetry
        uses: snok/install-poetry@76e04a911780d5b312d89783f7b1cd627778900a # v1.4.1
        with:
          virtualenvs-create: true
          virtualenvs-in-project: true
          installer-parallel: true

      - name: Load cached venv
        id: cached-poetry-dependencies
        uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
        with:
          path: .venv
          key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}

      - name: Install dependencies
        run: poetry install --no-interaction --no-ansi

      - name: Delete existing problems
        run: rm -rf leetcode/*/

      - name: Regenerate all problems from templates
        run: make gen-all-problems FORCE=1
        env:
          # Skip interactive confirmation
          CI: true

      - name: Run linting to verify reproducibility
        run: make lint
```

.github/workflows/ci-test.yml

Lines changed: 20 additions & 6 deletions

```diff
@@ -33,19 +33,33 @@ jobs:
           key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}
 
       - name: Install dependencies
-        if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
         run: poetry install --no-interaction --no-ansi
 
-      - name: Cache Graphviz
+      - name: Cache Graphviz installation
         id: cache-graphviz
         uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # v4.2.4
         with:
-          path: /usr/bin/dot
-          key: graphviz-${{ runner.os }}
+          path: ~/graphviz-cache
+          key: graphviz-installed-${{ runner.os }}
 
       - name: Install Graphviz
-        if: steps.cache-graphviz.outputs.cache-hit != 'true'
-        run: sudo apt-get update && sudo apt-get install -y graphviz
+        run: |
+          if [ "${{ steps.cache-graphviz.outputs.cache-hit }}" = "true" ]; then
+            sudo cp ~/graphviz-cache/bin/* /usr/bin/ 2>/dev/null || true
+            sudo cp ~/graphviz-cache/lib/* /usr/lib/x86_64-linux-gnu/ 2>/dev/null || true
+            sudo cp -r ~/graphviz-cache/share/graphviz /usr/share/ 2>/dev/null || true
+            sudo cp -r ~/graphviz-cache/lib/graphviz /usr/lib/x86_64-linux-gnu/ 2>/dev/null || true
+            sudo ldconfig
+            sudo dot -c
+          else
+            sudo apt-get update
+            sudo apt-get install -y graphviz
+            mkdir -p ~/graphviz-cache/{bin,lib,share}
+            cp /usr/bin/{dot,neato,twopi,circo,fdp,sfdp,patchwork,osage} ~/graphviz-cache/bin/ 2>/dev/null || true
+            cp /usr/lib/x86_64-linux-gnu/lib{gvc,cgraph,cdt,pathplan,gvpr,lab-gamut,ann,gts}* ~/graphviz-cache/lib/ 2>/dev/null || true
+            cp -r /usr/lib/x86_64-linux-gnu/graphviz ~/graphviz-cache/lib/ 2>/dev/null || true
+            cp -r /usr/share/graphviz ~/graphviz-cache/share/ 2>/dev/null || true
+          fi
 
       - name: Run tests
         run: make test
```

.templates/check_test_cases.py

Lines changed: 91 additions & 0 deletions

@@ -0,0 +1,91 @@

```python
#!/usr/bin/env python3

import json
from pathlib import Path
from typing import Optional
import typer


def count_test_cases(json_data):
    """Count total test cases across all test methods."""
    total = 0

    # Handle both direct test_methods and nested _test_methods.list
    test_methods = json_data.get("test_methods", [])
    if not test_methods and "_test_methods" in json_data:
        test_methods = json_data["_test_methods"].get("list", [])

    for method in test_methods:
        test_cases = method.get("test_cases", "")
        if test_cases.strip():
            # Parse the test_cases string to count actual test cases
            try:
                # Remove outer brackets and split by top-level commas
                cases_str = test_cases.strip()
                if cases_str.startswith("[") and cases_str.endswith("]"):
                    cases_str = cases_str[1:-1]  # Remove outer brackets

                # Count test cases by counting commas at parenthesis depth 0
                depth = 0
                case_count = 1 if cases_str.strip() else 0

                for char in cases_str:
                    if char in "([{":
                        depth += 1
                    elif char in ")]}":
                        depth -= 1
                    elif char == "," and depth == 0:
                        case_count += 1

                total += case_count
            except Exception:
                # Fallback to old method if parsing fails
                total += test_cases.count("(") - test_cases.count("([") + test_cases.count("[(")
    return total


def main(
    threshold: int = typer.Option(
        10, "--threshold", "-t", help="Show files with test cases <= threshold"
    ),
    max_results: str = typer.Option(
        1, "--max", "-m", help="Maximum number of results to show ('none' for no limit)"
    ),
):
    """Check test case counts in LeetCode JSON templates."""
    json_dir = Path(".templates/leetcode/json")
    all_files = []

    for json_file in json_dir.glob("*.json"):
        try:
            with open(json_file) as f:
                data = json.load(f)

            test_count = count_test_cases(data)
            all_files.append((json_file.name, test_count))
        except Exception as e:
            typer.echo(f"Error reading {json_file.name}: {e}", err=True)

    # Sort by test count
    all_files.sort(key=lambda x: x[1])

    # Filter by threshold
    filtered_files = [f for f in all_files if f[1] <= threshold]

    # Apply max results limit
    if max_results.lower() not in ["none", "null", "-1"]:
        try:
            max_count = int(max_results)
            if max_count > 0:
                filtered_files = filtered_files[:max_count]
        except ValueError:
            typer.echo(f"Invalid max_results value: {max_results}", err=True)
            raise typer.Exit(1)

    typer.echo(f"Files with ≤{threshold} test cases ({len(filtered_files)} total):")
    for filename, count in filtered_files:
        typer.echo(f"{filename}: {count} test cases")


if __name__ == "__main__":
    typer.run(main)
```
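
A minimal sanity check of `count_test_cases`, shown as a hedged sketch: the import path is illustrative (`.templates/` is not a Python package), and the template dict simply mirrors the `_test_methods` shape used by the JSON templates, such as the two_sum defaults below.

```python
from check_test_cases import count_test_cases  # illustrative import path

template = {
    "_test_methods": {
        "list": [
            {"test_cases": "[([2, 7, 11, 15], 9, [0, 1]), ([3, 2, 4], 6, [1, 2])]"}
        ]
    }
}

# One comma at bracket depth 0 inside the outer list, so two test cases are counted.
assert count_test_cases(template) == 2
```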

.templates/leetcode/cookiecutter.json

Lines changed: 24 additions & 12 deletions

```diff
@@ -18,20 +18,32 @@
   "readme_constraints": "- 2 <= nums.length <= 10^4\n- -10^9 <= nums[i] <= 10^9\n- -10^9 <= target <= 10^9\n- Only one valid answer exists.",
   "readme_additional": "",
 
+  "helpers_imports": "import pytest\nfrom leetcode_py.test_utils import logged_test\nfrom .solution import Solution",
+  "helpers_content": "",
+  "helpers_run_name": "two_sum",
+  "helpers_run_signature": "(solution_class: type, nums: list[int], target: int)",
+  "helpers_run_body": " implementation = solution_class()\n return implementation.two_sum(nums, target)",
+  "helpers_assert_name": "two_sum",
+  "helpers_assert_signature": "(result: list[int], expected: list[int]) -> bool",
+  "helpers_assert_body": " assert result == expected\n return True",
+
   "solution_imports": "",
+  "solution_contents": "",
+  "solution_class_content": "",
+
+  "test_imports": "import pytest\nfrom leetcode_py.test_utils import logged_test\nfrom .helpers import assert_two_sum, run_two_sum\nfrom .solution import Solution",
+  "test_content": "",
+  "test_class_name": "TwoSum",
+  "test_class_content": " def setup_method(self):\n self.solution = Solution()",
   "_solution_methods": {
     "list": [
       {
         "name": "two_sum",
-        "parameters": "nums: list[int], target: int",
-        "return_type": "list[int]",
-        "dummy_return": "[]"
+        "signature": "(self, nums: list[int], target: int) -> list[int]",
+        "body": " # TODO: Implement two_sum\n return []"
       }
     ]
   },
-
-  "test_imports": "import pytest\nfrom loguru import logger\nfrom leetcode_py.test_utils import logged_test\nfrom .solution import Solution",
-  "test_class_name": "TwoSum",
   "_test_helper_methods": {
     "list": [
       {
@@ -45,16 +57,16 @@
     "list": [
       {
         "name": "test_two_sum",
+        "signature": "(self, nums: list[int], target: int, expected: list[int])",
         "parametrize": "nums, target, expected",
-        "parametrize_typed": "nums: list[int], target: int, expected: list[int]",
         "test_cases": "[([2, 7, 11, 15], 9, [0, 1]), ([3, 2, 4], 6, [1, 2])]",
-        "body": "result = self.solution.two_sum(nums, target)\nassert result == expected"
+        "body": " result = run_two_sum(Solution, nums, target)\n assert_two_sum(result, expected)"
       }
     ]
   },
 
-  "playground_imports": "from solution import Solution",
-  "playground_test_case": "# Example test case\nnums = [2, 7, 11, 15]\ntarget = 9\nexpected = [0, 1]",
-  "playground_execution": "result = Solution().two_sum(nums, target)\nresult",
-  "playground_assertion": "assert result == expected"
+  "playground_imports": "from helpers import run_two_sum, assert_two_sum\nfrom solution import Solution",
+  "playground_setup": "# Example test case\nnums = [2, 7, 11, 15]\ntarget = 9\nexpected = [0, 1]",
+  "playground_run": "result = run_two_sum(Solution, nums, target)\nresult",
+  "playground_assert": "assert_two_sum(result, expected)"
 }
```
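
As a hedged sketch (an assumed rendering, not actual cookiecutter output), the new `signature`/`body` and `helpers_*` fields above would expand to roughly the following stubs for the two_sum defaults:

```python
# solution.py stub built from _solution_methods.signature / .body
class Solution:
    def two_sum(self, nums: list[int], target: int) -> list[int]:
        # TODO: Implement two_sum
        return []


# helpers.py counterparts built from the helpers_run_* / helpers_assert_* fields
def run_two_sum(solution_class: type, nums: list[int], target: int):
    implementation = solution_class()
    return implementation.two_sum(nums, target)


def assert_two_sum(result: list[int], expected: list[int]) -> bool:
    assert result == expected
    return True
```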
