A Python-based AI coding assistant that provides intelligent code suggestions and programming help through a user-friendly Gradio web interface. The application connects to a local Ollama server running the "codeguru" model to generate contextual responses for coding queries.
- AI-Powered Code Assistant: Leverages the "codeguru" model for intelligent coding suggestions
- Conversation History: Maintains context across multiple interactions
- Web Interface: Clean and intuitive Gradio-based UI
- Real-time Responses: Instant feedback for your coding queries
- Local Processing: Runs entirely on your local machine for privacy
Before running the SmartCode Assistant, ensure you have the following installed:
- Python
- Ollama (download from ollama.ai)
- `requests` - For HTTP API calls
- `gradio` - For the web interface
- `json` - For data serialization (built-in)
1. Clone or download the project files

   ```bash
   git clone https://github.com/Atish019/SmartCode-Assistant.git
   cd SmartCode-Assistant
   ```
2. Install Python dependencies

   ```bash
   pip install -r requirements.txt
   ```

   Or install manually:

   ```bash
   pip install requests gradio
   ```
3. Install and set up Ollama

   - Download and install Ollama from ollama.ai
   - Pull the codeguru model:

     ```bash
     ollama pull codeguru
     ```
4. Start the Ollama server

   ```bash
   ollama serve
   ```

   This will start the Ollama API server on http://localhost:11434.
5. Run the SmartCode Assistant

   ```bash
   python app.py
   ```
6. Access the web interface

   - The Gradio interface will launch automatically
   - Open your browser and navigate to the provided local URL (typically http://127.0.0.1:7860)
7. Start coding with AI assistance

   - Enter your coding questions, problems, or requests in the text area
   - Click submit to get AI-generated responses
   - The assistant maintains conversation history for better context
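The internals of `app.py` aren't shown in this README, but the flow above (keep a running history, send it plus the new question to Ollama's `/api/generate` endpoint) can be sketched roughly as follows. Function and variable names here are illustrative assumptions, and the standard library is used in place of `requests` to keep the sketch self-contained:

```python
# Minimal sketch of a history-aware request to Ollama's /api/generate.
# build_prompt/ask are assumed names; app.py's real implementation may differ.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def build_prompt(history, question):
    """Flatten prior (question, answer) turns plus the new question into one prompt."""
    turns = [f"User: {q}\nAssistant: {a}" for q, a in history]
    turns.append(f"User: {question}\nAssistant:")
    return "\n".join(turns)


def ask(history, question):
    """POST the combined prompt to Ollama and return the model's reply text."""
    payload = json.dumps({
        "model": "codeguru",
        "prompt": build_prompt(history, question),
        "stream": False,  # request a single JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Appending each `(question, answer)` pair back onto `history` after every call is what gives the assistant context across turns.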
```
SmartCode-Assistant/
├── venv/              # Virtual environment (optional)
├── app.py             # Main application file
├── modelfile          # Ollama model configuration
├── requirements.txt   # Python dependencies
└── README.md          # This file
```
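The contents of the repository's `modelfile` aren't reproduced here; a typical Ollama Modelfile (the base model and system prompt below are illustrative assumptions, not the project's actual configuration) looks like:

```
# Illustrative Ollama Modelfile: derive a custom model from a base model
FROM codellama
SYSTEM "You are SmartCode, an expert programming assistant. Answer coding questions clearly and concisely."
```

Running `ollama create codeguru -f modelfile` would register such a configuration as a local model.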
The application is configured to connect to Ollama at http://localhost:11434/api/generate. If your Ollama server runs on a different port or address, modify the `url` variable in `app.py`:

```python
url = "http://your-ollama-host:port/api/generate"
```

Example questions you can ask:

- "How do I implement a binary search algorithm in Python?"
- "Explain the difference between lists and tuples"
- "Write a function to validate email addresses"
- "Help me debug this sorting algorithm"
- "What's the best way to handle exceptions in Python?"
- Responses starting with `"error: "`: Check the console output for detailed error information
- HTTP 404: Verify the Ollama API endpoint is correct
- HTTP 500: Check if the specified model exists and is accessible
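When debugging the HTTP errors above, it can help to first confirm the Ollama server is reachable and that the model is installed. The helper below is an illustrative sketch (not part of the project) that queries Ollama's model-listing endpoint, `/api/tags`:

```python
# Illustrative connectivity check for a local Ollama server.
import json
import urllib.request


def check_ollama(base_url="http://localhost:11434"):
    """Return a short status string: server reachability and installed models."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            models = [m["name"] for m in json.loads(resp.read()).get("models", [])]
        return f"Ollama is up; installed models: {models}"
    except OSError as exc:  # covers connection refused, timeouts, bad host
        return f"Ollama not reachable at {base_url}: {exc}"
```

If `codeguru` is missing from the listed models, `ollama pull codeguru` (or `ollama create` from the modelfile) should resolve the HTTP 500 case.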
Happy Coding with SmartCode-Assistant!