This repository demonstrates the basics of working with multiple Large Language Model (LLM) providers. It shows how to send requests to different models in both streaming and non-streaming modes.
- Anthropic (Claude)
- Azure OpenAI
- Google Gemini
- Mistral
- OpenAI
- `anthropic/` → examples with Claude models
- `azure_open_ai/` → examples with Azure-hosted OpenAI models
- `google/` → examples with Gemini models
- `mistral/` → examples with Mistral models
- `open_ai/` → examples with OpenAI models
- `dev_chat/` → interactive developer chat with provider selection and transcripts
- `.env.example` → template for environment variables
Each provider directory contains scripts for two request modes (a minimal sketch follows the list):
- Streaming responses (real-time output)
- Non-streaming responses (full output returned at once)
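The difference between the two modes comes down to a single flag on the request. A minimal sketch using the `openai` SDK (the model name and prompt are illustrative, not taken from the repository's scripts):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Non-streaming: the full completion is returned in one response.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)

# Streaming: tokens arrive incrementally and are printed as they come in.
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```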
The `providers/` folder includes a lightweight provider class for each API (Anthropic, OpenAI, Azure OpenAI, Mistral, Google Gemini). They expose a unified interface, so any of them can be plugged into the `dev_chat` tool.
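The repository's actual class and method names aren't reproduced here; a hypothetical sketch of what such a unified wrapper might look like:

```python
# Hypothetical sketch of a unified provider interface; the repository's
# real class and method names may differ.
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    """Common surface every provider wrapper exposes to dev_chat."""

    @abstractmethod
    def complete(self, messages: list[dict]) -> str:
        """Return the assistant's reply for a list of chat messages."""


class AnthropicProvider(BaseProvider):
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):  # assumed model
        import anthropic

        self.client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        self.model = model

    def complete(self, messages: list[dict]) -> str:
        response = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=messages,
        )
        return response.content[0].text
```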
- Clone the repository.
- Create a Python virtual environment:

  ```bash
  python -m venv .venv
  ```

- Activate the virtual environment:
  - On Linux/macOS:

    ```bash
    source .venv/bin/activate
    ```

  - On Windows (PowerShell):

    ```powershell
    .venv\Scripts\Activate.ps1
    ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Copy `.env.example` to `.env` and fill in your API keys.
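The exact variable names are defined in `.env.example`; an illustrative layout, assuming the SDKs' default environment variables:

```
# Illustrative .env layout; check .env.example for the exact names.
ANTHROPIC_API_KEY=...
OPENAI_API_KEY=...
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
MISTRAL_API_KEY=...
GOOGLE_API_KEY=...
```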
The `dev_chat` tool lets you chat interactively with any supported provider. Features include:
- Provider selection at runtime
- Support for conversation history
- 10-turn session limit
- Markdown transcript export (sketched below)
Run the chat:

```bash
python dev_chat/dev_chat.py
```

or, on Windows, via the `py` launcher:

```bash
py dev_chat/dev_chat.py
```
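The turn limit and transcript export reduce to a short loop. A rough sketch, reusing the hypothetical `BaseProvider` interface from above (all names are illustrative, not the repository's actual code):

```python
# Illustrative sketch of the dev_chat session loop; the real script's
# structure and names may differ.
MAX_TURNS = 10  # the 10-turn session limit


def run_session(provider) -> list[dict]:
    history = []  # the full conversation history is sent back on every turn
    for _ in range(MAX_TURNS):
        user_input = input("you> ")
        history.append({"role": "user", "content": user_input})
        reply = provider.complete(history)
        history.append({"role": "assistant", "content": reply})
        print(f"assistant> {reply}")
    return history


def export_transcript(history: list[dict], path: str = "transcript.md") -> None:
    # Markdown transcript export: one heading per message.
    with open(path, "w", encoding="utf-8") as f:
        for message in history:
            f.write(f"### {message['role']}\n\n{message['content']}\n\n")
```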
The project depends on the following packages (see `requirements.txt`):
- `anthropic`
- `openai`
- `mistralai`
- `google-genai`
- `python-dotenv`
- `rich`
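`python-dotenv` is what makes the keys in `.env` visible to the SDK clients; the usual pattern, for reference:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env and injects its keys into the process environment
print("OpenAI key loaded:", "OPENAI_API_KEY" in os.environ)
```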
The examples are inspired by and adapted from the official SDK documentation of:
- OpenAI
- Azure OpenAI
- Anthropic
- Google Generative AI
- Mistral