ChatwithLLM is a collection of Streamlit-based chat interfaces for various Large Language Models (LLMs). This repository demonstrates how to create simple, interactive chat applications using different AI providers and models.
- `chat_with_ollama.py`: Chat interface for local LLMs using Ollama.
- `chat_with_claude.py`: Chat interface for Anthropic's Claude 3 Opus model.
- `chat_with_gpt.py`: Chat interface for OpenAI's GPT models.
- `chat_with_groq.py`: Chat interface for Groq's LLM models.
- 💬 Interactive chat interfaces for multiple AI models
- 🖥️ Local LLM support with Ollama
- ☁️ Cloud-based LLM support with Anthropic, OpenAI, and Groq
- 🔄 Conversation history management
- 🌊 Streaming responses for a more dynamic chat experience
- Clone the repository:

  ```bash
  git clone https://github.com/nhtkid/chatwithLLM.git
  cd chatwithLLM
  ```
- Install the required dependencies for all scripts:

  ```bash
  pip install streamlit anthropic python-dotenv openai groq
  ```
- For the Ollama script, make sure you have Ollama installed and running locally.
- Create a `.env` file in the root directory of the project.
- Add your API keys to the `.env` file:

  ```
  ANTHROPIC_API_KEY=your_anthropic_api_key_here
  OPENAI_API_KEY=your_openai_api_key_here
  GROQ_API_KEY=your_groq_api_key_here
  ```
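If you want to confirm the keys are being picked up, here is a quick hypothetical check (not part of the repository) using python-dotenv, which the scripts rely on:

```python
# Hypothetical sanity check, not part of the repository: verifies that
# python-dotenv finds the .env file in the current working directory.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory
for key in ("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "GROQ_API_KEY"):
    print(key, "set" if os.getenv(key) else "MISSING")
```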
To run any of the chat interfaces, use the Streamlit CLI:
- For Ollama chat:

  ```bash
  streamlit run chat_with_ollama.py
  ```

- For Claude chat:

  ```bash
  streamlit run chat_with_claude.py
  ```

- For GPT chat:

  ```bash
  streamlit run chat_with_gpt.py
  ```

- For Groq chat:

  ```bash
  streamlit run chat_with_groq.py
  ```
Each script will launch a Streamlit app in your default web browser, where you can interact with the chosen LLM.
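All four scripts share the same basic shape: conversation history kept in `st.session_state` and responses streamed into the chat as they arrive. The sketch below illustrates that shared skeleton; the variable names and the stand-in generator are illustrative, not copied from the scripts.

```python
# Minimal sketch of the shared Streamlit chat skeleton; names and layout
# are illustrative, not the scripts' actual code. History is kept in
# st.session_state so it survives Streamlit's rerun-per-interaction model.
import streamlit as st

if "messages" not in st.session_state:
    st.session_state.messages = []  # each item: {"role": ..., "content": ...}

# Replay the stored conversation on every rerun
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        # Each script streams its provider's reply here; this stand-in
        # iterator just demonstrates the streaming hook.
        reply = st.write_stream(iter(["(streamed ", "model ", "reply)"]))
    st.session_state.messages.append({"role": "assistant", "content": reply})
```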
`chat_with_ollama.py`:

- Uses a local Ollama server to run LLMs
- Default model: `phi3` (can be changed in the script)
- Streams responses for a more dynamic chat experience (see the sketch after this list)
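The dependency list above includes no dedicated Ollama client, so one plausible approach, shown purely as a hedged sketch, is to stream from the server's REST endpoint (`http://localhost:11434/api/chat`) directly; whether the script does exactly this is an assumption.

```python
# Sketch only: streaming chat chunks from a local Ollama server's REST API.
# The endpoint and payload follow Ollama's documented /api/chat interface;
# whether the script uses this exact approach is an assumption.
import json
import requests

def stream_ollama(messages, model="phi3"):
    """Yield text chunks from the local Ollama chat endpoint."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": model, "messages": messages, "stream": True},
        stream=True,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # Ollama streams newline-delimited JSON objects
            chunk = json.loads(line)
            yield chunk.get("message", {}).get("content", "")

for part in stream_ollama([{"role": "user", "content": "Hello!"}]):
    print(part, end="", flush=True)
```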
`chat_with_claude.py`:

- Interfaces with Anthropic's Claude 3 Opus model
- Uses the latest Claude model available at the time of writing
- Streams responses for real-time interaction
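The streaming call itself likely looks something like the sketch below, using the `anthropic` SDK from the install step; the model id and `max_tokens` value are illustrative, and the script's settings may differ.

```python
# Sketch of streaming from Claude 3 Opus with the official anthropic SDK.
# The model id and max_tokens are illustrative; the script may differ.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with client.messages.stream(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
```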
`chat_with_gpt.py`:

- Connects to OpenAI's GPT models
- Default model: `gpt-4-turbo` (can be changed in the script)
- Includes a system message to set the assistant's behavior (sketched below)
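A hedged sketch of what the system-message setup looks like with the `openai` SDK; the system prompt text here is a stand-in, not the one used in the script.

```python
# Sketch of a streaming chat completion with a system message via the
# openai SDK (v1+ client interface). The system prompt is a stand-in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```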
`chat_with_groq.py`:

- Utilizes Groq's inference engine for various LLMs
- Default model: `llama3-70b-8192` (can be changed in the script)
- Provides fast inference for supported models
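The `groq` SDK mirrors the OpenAI client shape, so the call likely resembles this sketch (again, an assumption about the script's internals):

```python
# Sketch of a streaming Groq chat completion; the groq SDK follows the
# same client shape as openai's. Model id matches the default noted above.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

stream = client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```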
Contributions, issues, and feature requests are welcome!
This project is open source and available under the MIT License.