This project sets up a ReAct‑style agent that can:
- Read user prompts
- Use a vector index over your `data/` folder (including PDFs) via `QueryEngineTool`
- Read raw code files from `data/` via `code_reader`
- Generate new code with a dedicated LLM (`codellama`)
- Parse the LLM's output into JSON (with `code`, `description`, `filename`)
- Save the generated code into `output/`
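The last two steps, parsing the model's JSON reply and saving the result, can be sketched with the standard library alone. This is only a minimal sketch: the project itself validates the reply with `PydanticOutputParser`, and the `save_generated_code` helper and sample reply below are hypothetical.

```python
import json
from pathlib import Path

def save_generated_code(llm_reply: str, output_dir: str = "output") -> Path:
    """Parse a JSON reply with code/description/filename and write the code out.

    Hypothetical helper: the real project validates the reply with a
    Pydantic model before saving.
    """
    data = json.loads(llm_reply)
    target = Path(output_dir) / data["filename"]
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(data["code"])
    return target

# A reply shaped like the agent's expected output
reply = json.dumps({
    "code": "print('hello')\n",
    "description": "Trivial demo script",
    "filename": "hello.py",
})
```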
```mermaid
flowchart LR
    subgraph User
        A[Enter Prompt]
    end
    subgraph Agent
        direction TB
        B[ReActAgent]
        B -->|calls| C(code_reader)
        B -->|calls| D(QueryEngineTool)
        C --> E[reads data/<file>]
        D --> F[VectorStoreIndex over data/]
        F --> G["LLM (llama3.2) for docs"]
        B --> H["LLM (codellama) for code"]
    end
    subgraph Post-Processing
        H --> I[PydanticOutputParser]
        I --> J["JSON {code, description, filename}"]
        J --> K[write to output/]
    end
    A --> B
```
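The `code_reader` node in the diagram boils down to a small file-reading function. A minimal sketch follows; the dict shape and error handling are assumptions, not the project's exact implementation, and the real tool wraps this logic so the agent can call it by name.

```python
from pathlib import Path

def code_reader(file_name: str, data_dir: str = "data") -> dict:
    """Return the raw contents of a file under data/, or an error message.

    Sketch of the code_reader tool's core behavior (hypothetical signature).
    """
    path = Path(data_dir) / file_name
    try:
        return {"file_content": path.read_text()}
    except OSError as exc:
        return {"error": str(exc)}
```

Returning the error as data rather than raising lets the agent see the failure and retry with a different file name.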
```bash
# create and activate a venv
python3 -m venv .venv
source .venv/bin/activate  # on Windows use: .venv\Scripts\activate

# install dependencies
pip install -r requirements.txt
```

- Populate `data/` with code files or PDFs.
- Ensure your `.env` has `LLAMA_CLOUD_API_KEY`.
- Run `python main.py` and enter prompts when prompted.
- Generated code appears in `output/` with the returned filename.
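A startup guard for the API key can be as simple as the sketch below. This is a hypothetical check, not code from `main.py`, and it assumes the `.env` file has already been loaded into the environment (e.g. via `python-dotenv`):

```python
import os

def require_api_key(name: str = "LLAMA_CLOUD_API_KEY") -> str:
    """Fail fast with a clear message if the key is missing from the environment."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value
```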
Copy‑paste the prompt below into the running agent to generate a client script:

```
Read the contents of data/test.py and write a Python script that calls the POST /items endpoint to create a new item.
```
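For reference, the generated client might look roughly like this stdlib-only sketch. The base URL and the item payload fields are assumptions for illustration; the actual endpoint and schema depend on what `data/test.py` defines, which the README does not specify.

```python
import json
import urllib.request

def build_create_item_request(base_url: str, item: dict) -> urllib.request.Request:
    """Build (but do not send) a POST /items request with a JSON body."""
    return urllib.request.Request(
        url=f"{base_url}/items",
        data=json.dumps(item).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it would be urllib.request.urlopen(req), which requires the API to be running.
req = build_create_item_request("http://localhost:8000", {"name": "widget"})
```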