A Streamlit-based chat interface that works with LM Studio for local AI interactions. This project provides a ChatGPT-like experience using your own locally hosted language models.
- Clean, intuitive chat interface
- Real-time streaming responses
- Local model support via LM Studio
- Message history persistence during session
- Error handling for connection issues
- Python 3.8 or higher
- LM Studio installed on your machine
- A compatible language model loaded in LM Studio
- Clone this repository:

  ```shell
  git clone https://github.com/Hunter-pro/RAG-chatbot_MLH.git
  cd RAG-chatbot_MLH
  ```

- Install the required dependencies:

  ```shell
  pip install -r requirements.txt
  ```
- Start LM Studio and load your preferred language model
- Ensure LM Studio's server is running on port 1234 (default port)
- Run the Streamlit app:

  ```shell
  streamlit run pages/😊AI_Chat.py
  ```
- Open your browser and navigate to the Streamlit app (typically http://localhost:8501)
- Start chatting with your local AI model
- The chat history will be maintained during your session
- LM Studio must be running with a loaded model before starting the chat interface
- The LM Studio server runs on http://localhost:1234 by default
- No API key is required, as this uses local models
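LM Studio's local server speaks an OpenAI-compatible REST API, which is why no API key is needed. Below is a minimal sketch of one chat round trip against the default port; the `build_payload` helper and the `local-model` name are illustrative, not part of this repository (LM Studio serves whichever model you have loaded):

```python
# Minimal sketch of a chat completion request to LM Studio's local,
# OpenAI-compatible server (default: http://localhost:1234/v1).
import json
import urllib.request

LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(messages, stream=True):
    """Assemble an OpenAI-style chat completion request body."""
    return {
        "model": "local-model",  # placeholder; LM Studio uses the loaded model
        "messages": messages,
        "stream": stream,
    }

def chat(messages):
    """Send the chat history and return the assistant's reply (non-streaming)."""
    req = urllib.request.Request(
        LM_STUDIO_URL,
        data=json.dumps(build_payload(messages, stream=False)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With a model loaded, `chat([{"role": "user", "content": "Hello!"}])` returns the model's reply; setting `stream=True` instead yields server-sent chunks, which is how the app's real-time streaming works.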
If you encounter issues:
- Verify LM Studio is running and a model is loaded
- Check if the LM Studio server is running on port 1234
- Look for error messages in the Streamlit interface
- Ensure all dependencies are correctly installed
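The first two checks above can be scripted: LM Studio's server exposes an OpenAI-compatible `/v1/models` endpoint that lists loaded models. This sketch assumes the default port; `models_endpoint` and `check_server` are illustrative helpers, not part of this repository:

```python
# Quick connectivity check against LM Studio's local server.
import json
import urllib.error
import urllib.request

def models_endpoint(base="http://localhost:1234"):
    """URL of the OpenAI-compatible model-listing endpoint."""
    return base.rstrip("/") + "/v1/models"

def check_server(base="http://localhost:1234", timeout=3):
    """Return the list of loaded model ids, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(models_endpoint(base), timeout=timeout) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return None
```

If `check_server()` returns `None`, LM Studio is not reachable on port 1234; an empty list means the server is up but no model is loaded.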
This project is open source and available under the MIT License.
Contributions, issues, and feature requests are welcome! Feel free to check the issues page.
Built with Streamlit and LM Studio for local AI interactions 🤖