A Python code agent built with LangGraph that writes solutions for specific LeetCode problems, runs their test cases, and self-corrects failing solutions.

This project uses the Hugging Face dataset `newfacade/LeetCodeDataset` as the source of problems, including the starter code, entry points, and tests.
- Problems are loaded from the `test` split.
- Python 3.12+
- One provider:
  - Ollama
  - OpenAI-compatible local server
  - MLX LM server (`mlx_lm.server`)
- A model supported by the chosen provider (default: `llama3.1`)
Example for Ollama:

```shell
ollama pull llama3.1
```

Example for MLX:

```shell
pip install mlx-lm
mlx_lm.server --model mlx-community/Llama-3.1-8B-Instruct
```

Using uv:

```shell
uv sync
```

Or using pip:

```shell
python -m venv .venv
source .venv/bin/activate
pip install -e .
```

Run:

```shell
debugging-code-agent
```

Or with the module:

```shell
python -m debugging_code_agent
```

Ollama (default):

```shell
debugging-code-agent --provider ollama --model llama3.1
```

OpenAI-compatible local server:

```shell
debugging-code-agent --provider server --base-url http://localhost:8000/v1 --model Qwen/Qwen2.5-Coder-7B-Instruct
```

MLX LM server (defaults to `http://127.0.0.1:8080/v1`):

```shell
debugging-code-agent --provider mlx --model mlx-community/Llama-3.1-8B-Instruct
```

Options:

```
--provider {ollama,server,mlx}   LLM provider (default: ollama)
--model MODEL                    Model name for the selected provider (default: llama3.1)
--temperature TEMPERATURE        Sampling temperature (default: 0.1)
--base-url BASE_URL              Base URL for server/mlx providers
--api-key API_KEY                Optional API key for server/mlx providers
--max-attempts MAX_ATTEMPTS      Maximum solve attempts per problem (default: 5)
```
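The flags above map directly onto a standard `argparse` parser. The sketch below is an illustration of that mapping under the documented defaults, not the project's actual source:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """CLI mirroring the documented flags and defaults."""
    parser = argparse.ArgumentParser(prog="debugging-code-agent")
    parser.add_argument("--provider", choices=["ollama", "server", "mlx"],
                        default="ollama", help="LLM provider")
    parser.add_argument("--model", default="llama3.1",
                        help="Model name for the selected provider")
    parser.add_argument("--temperature", type=float, default=0.1,
                        help="Sampling temperature")
    parser.add_argument("--base-url", help="Base URL for server/mlx providers")
    parser.add_argument("--api-key", help="Optional API key for server/mlx providers")
    parser.add_argument("--max-attempts", type=int, default=5,
                        help="Maximum solve attempts per problem")
    return parser
```

Running with no arguments then yields the Ollama defaults, e.g. `build_parser().parse_args([])` gives `provider="ollama"` and `model="llama3.1"`.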
- LangGraph — agent graph and state management
- Ollama — local LLM inference
- LangChain Chat — OpenAI-compatible server support
- Textual — terminal UI for problem selection
- Hugging Face Datasets — problem source