The Agent class coordinates the LLM, external tools, and internal modules. I use iterative refinement: the agent operates in a loop, observing tool outputs and correcting course dynamically. Tool execution is kept clearly separated from the LLM calls.
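The loop can be sketched as below. This is a minimal illustration, not the real Agent class: `call_llm` and `run_tool` are hypothetical stand-ins for the LLM client and the tool registry, showing how observation and correction stay separate from generation.

```python
def call_llm(messages):
    # Stand-in for the real LLM call: after seeing one tool result, it finishes.
    if any(m["role"] == "tool" for m in messages):
        return {"type": "final", "content": "done"}
    return {"type": "tool_call", "tool": "echo", "args": {"text": "hi"}}

def run_tool(name, args):
    # Tool execution lives outside the LLM: a plain dispatch table.
    tools = {"echo": lambda text: text}
    return tools[name](**args)

def agent_loop(task, max_iters=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_iters):
        decision = call_llm(messages)
        if decision["type"] == "final":
            return decision["content"]
        # Observe the tool output and feed it back so the LLM can correct course.
        output = run_tool(decision["tool"], decision["args"])
        messages.append({"role": "tool", "content": str(output)})
    return "max iterations reached"
```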
Here, I used Streamlit to run the tool in a web environment. Session state persists the agent's memory across reruns, and st.status blocks provide real-time feedback during tool execution so users can see what steps the agent is taking.
Instead of just truncating old messages, the context manager uses a sliding window with summarization: recent messages are kept verbatim while older ones are condensed into a summary. This preserves essential context and architectural decisions while keeping the prompt size bounded on every iteration.
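A hedged sketch of the idea, where `summarize` is a placeholder for the LLM summarization call the real context manager would make:

```python
def summarize(messages):
    # Assumption: in the real system an LLM condenses the evicted turns.
    return "summary of %d earlier messages" % len(messages)

def manage_context(messages, window=4):
    """Keep the last `window` messages verbatim; fold older ones into a summary."""
    if len(messages) <= window:
        return messages
    old, recent = messages[:-window], messages[-window:]
    return [{"role": "system", "content": summarize(old)}] + recent
```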
This forces explicit user consent for file system or system-level changes.
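One way to sketch such a gate (the tool names and the `ask` prompt are illustrative, not the project's actual API):

```python
# Tools assumed here to be destructive; the real list would come from the tool registry.
DESTRUCTIVE_TOOLS = {"write_file", "delete_file", "run_shell"}

def execute_with_consent(tool_name, action, ask=input):
    # Destructive tools require an explicit "y" from the user before running.
    if tool_name in DESTRUCTIVE_TOOLS:
        answer = ask(f"Allow {tool_name}? [y/N] ")
        if answer.strip().lower() != "y":
            return "denied"
    return action()
```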
The LLM is required to split broad requests into a structured, JSON-formatted list of subtasks. Treating the agent as a PM that builds a logical roadmap helps reduce hallucinations and missed requirements.
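A minimal sketch of that planning step, assuming a schema of `{"id": ..., "description": ...}` objects (the prompt wording and validation are illustrative):

```python
import json

# Illustrative prompt requiring a machine-parseable plan.
PLAN_PROMPT = (
    "Split the request into subtasks. Respond ONLY with a JSON list of "
    'objects like {"id": 1, "description": "..."}.'
)

def parse_plan(llm_response):
    # Validate the structure before executing any subtask.
    plan = json.loads(llm_response)
    if not isinstance(plan, list) or not all("description" in step for step in plan):
        raise ValueError("model did not return a valid subtask list")
    return plan
```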
Provides the agent with semantic search capabilities by indexing the codebase using AST-based chunking. It identifies logical blocks like functions and classes to ensure the LLM receives complete and relevant code segments.
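The chunking idea can be shown with the stdlib `ast` module: each top-level function or class becomes one chunk, so no definition is split mid-body. This is a simplified sketch, not the indexer's actual implementation.

```python
import ast

def chunk_source(source):
    """Split Python source into (name, code) chunks at function/class boundaries."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # get_source_segment returns the exact text of the logical block.
            chunks.append((node.name, ast.get_source_segment(source, node)))
    return chunks
```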
- Ensure you have `streamlit` installed (use the `requirements.txt`)
- Launch the app using the command:

```
python -m streamlit run app.py
```

- Ensure your API keys and `GOOGLE_CLOUD_PROJECT` are set up properly
- In the directory where `pyproject.toml` is located, run `pip install -e .`
- Try any of the following:

```
simplecoder "create a hello.py file"
simplecoder --use-rag "what does the Agent class do?"
simplecoder --use-planning "create a web server with routes for home and about"
```