LLM configuration tools for managing LiteLLM, Azure AI, GitHub Copilot, and aichat setups.
This repository provides a comprehensive toolkit for managing Large Language Model (LLM) configurations across multiple platforms:
- "litellm_config_gen.py": Synchronizes GitHub Copilot models with the LiteLLM configuration and merges external model definitions from "lllm.conf"
- "check_llm.py": Azure AI Services deployment inventory tool for listing Azure models and comparing them with the LiteLLM configuration
- "aichat_config_gen.py": Generates aichat configuration files from LiteLLM API responses
- "models.py": GitHub Copilot authentication and model-fetching utilities
The tools work together to provide end-to-end LLM configuration management: Copilot models are fetched and synchronized into LiteLLM config, Azure deployments are inventoried and compared, and aichat configurations are generated for seamless integration.
This toolkit can pull models from multiple upstream sources:
- GitHub Copilot: dynamically fetches available models based on your account access, supporting both individual and organization-level Copilot access
- Azure AI Services: discovers models deployed in your Azure subscriptions, covering both Azure OpenAI and Azure AI Services (formerly Cognitive Services) deployments (see the sketch below)
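For the Azure side, deployment discovery builds on the SDKs listed in the install step. The following is a minimal sketch of that approach, assuming "azure-identity" and "azure-mgmt-cognitiveservices" from "requirements.txt"; it illustrates the technique, not "check_llm.py"'s actual code:

```python
# Minimal sketch of an Azure deployment inventory, assuming the SDKs from
# requirements.txt. Illustrates the approach, not check_llm.py itself.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
)

for account in client.accounts.list():
    # The resource group is embedded in the account's ARM resource ID:
    # /subscriptions/<id>/resourceGroups/<rg>/providers/...
    resource_group = account.id.split("/")[4]
    for deployment in client.deployments.list(resource_group, account.name):
        model = deployment.properties.model
        print(f"{account.name}: {deployment.name} ({model.name} {model.version})")
```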
The toolkit supports external providers through the lllm.conf configuration file:
- Alibaba DashScope:
  - glm-4.5
  - Moonshot-Kimi-K2-Instruct
  - deepseek-v3.1
  - deepseek-v3.2-exp
  - llama-4-maverick-17b-128e-instruct
  - qwen3-coder-plus
  - qwen3-max
When network access is unavailable, the toolkit falls back to a built-in static list of common OpenAI models:
- GPT-4 series (gpt-4, gpt-4-0613, gpt-4-0125-preview)
- GPT-3.5 series (gpt-3.5-turbo, gpt-3.5-turbo-0613)
- GPT-4o series (gpt-4o-mini, gpt-4o-2024-05-13, gpt-4o-2024-08-06)
- O-series and GPT-4.1 models (o3-mini, o3-2025-04-16, o4-mini-2025-04-16, gpt-4.1-2025-04-14)
The toolkit generates configurations for LLM client tools. For aichat, "aichat_config_gen.py" produces configuration files supporting:
- Azure OpenAI models
- Google Gemini models
- OpenAI models
- Moonshot models
- Other providers through OpenAI-compatible API interfaces
- Create a ".env" file based on the ".env.template" file:
cp .env.template .env
- Update the ".env" file with your actual values (see ".env.template" for required variables):
LITELLM_API_KEY=your_actual_api_key_here
LITELLM_API_URL=http://localhost:4141/v1/model/info
LITELLM_API_BASE=http://localhost:4141
AICHAT_CONFIG_PATH=/path/to/your/aichat/config.yaml
AZURE_SUBSCRIPTION_ID=your_azure_subscription_id
- Install the required Python packages:
pip install -r requirements.txt
Or manually:
pip install python-dotenv requests azure-identity azure-mgmt-cognitiveservices ruamel.yaml
Synchronizes GitHub Copilot models in "config.yaml" with models available to the authenticated GitHub CLI user, and merges additional models from "lllm.conf".
python litellm_config_gen.py [options]
Options:
- "--static-only": Skip network/CLI; use static list only
- "--no-cli": Do not use gh CLI fallback
- "--no-http": Disable HTTP /models attempt
- "--quiet": Suppress debug output
- "--verbose": Enable debug output
- "--json": Emit JSON summary to stdout
- "--cache-file PATH": Path to cache file for model list
- "--cache-ttl SECONDS": Seconds cache file remains valid
- "--dry-run": Compute changes but do not write config.yaml
- "--output PATH": Path to config file to read/write (default: config.yaml)
Azure AI Services Deployment Inventory tool. Lists Azure models, LiteLLM models, or compares them.
python check_llm.py [options]
Options:
- "-d, --debug": Enable debug logging
- "-c, --compare": Compare with LiteLLM models
- "--type {azure,litellm}": Specify the type of models to list
- "--all-subscriptions": Retrieve resources from all subscriptions
Examples:
- List Azure models: "python check_llm.py --type azure"
- List LiteLLM models: "python check_llm.py --type litellm"
- Compare models: "python check_llm.py --compare"
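- List Azure models across all accessible subscriptions: "python check_llm.py --type azure --all-subscriptions"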
Example configuration file for LLM providers and models. Copy this to "lllm.conf" and customize for your needs.
cp lllm.conf.example lllm.conf
The file contains provider definitions and model mappings for services like Alibaba DashScope.
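The authoritative schema lives in "lllm.conf.example"; the sketch below is only a hypothetical illustration of a provider definition with model mappings. Every key name is assumed rather than taken from the real file; the endpoint shown is DashScope's public OpenAI-compatible URL:

```yaml
# Hypothetical sketch only -- consult lllm.conf.example for the real schema.
providers:
  - name: dashscope
    api_base: https://dashscope.aliyuncs.com/compatible-mode/v1  # OpenAI-compatible endpoint
    api_key: ${DASHSCOPE_API_KEY}                                # assumed key-reference syntax
    models:
      - qwen3-coder-plus
      - qwen3-max
```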
Generates an aichat configuration file from the LiteLLM API.
python aichat_config_gen.py [options]
Options:
- "--api-base URL": LiteLLM API base URL
- "--api-key KEY": LiteLLM API key
- "--model MODEL": Default model to use in config
- "--insecure": Disable SSL certificate verification
If options are not provided, the script will prompt for missing values or use environment variables from ".env".
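The generated file follows aichat's standard "config.yaml" layout. As a minimal sketch of the general shape, assuming an OpenAI-compatible LiteLLM endpoint (example values, not the generator's verbatim output):

```yaml
# Sketch of an aichat config pointing at a LiteLLM proxy; values are examples.
model: litellm:gpt-4              # default client:model pair
clients:
  - type: openai-compatible
    name: litellm
    api_base: http://localhost:4141/v1
    api_key: your_litellm_api_key
    models:
      - name: gpt-4
```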
Azure Authentication Problems:
- Ensure "AZURE_SUBSCRIPTION_ID" is set correctly in your ".env" file
- Verify Azure CLI authentication: "az login"
- Check that your account has appropriate permissions for Azure AI Services
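If login succeeds but the wrong subscription is active, the standard Azure CLI query below prints the current subscription ID so you can compare it against your ".env" value:

az account show --query id -o tsv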
LiteLLM API Connection Issues:
- Verify "LITELLM_API_URL" and "LITELLM_API_BASE" point to a running LiteLLM instance
- Check that "LITELLM_API_KEY" is valid and has the necessary permissions
- Use the "--verbose" flag for detailed error information
GitHub Copilot Authentication:
- Ensure GitHub CLI is installed and authenticated: "gh auth login"
- Verify Copilot access is enabled for your account/organization
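You can confirm both the login state and the token scopes with the GitHub CLI's built-in check:

gh auth status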
- "config.yaml": Main LiteLLM configuration file generated by "litellm_config_gen.py"
- "lllm.conf": External model definitions merged into "config.yaml" (copy from "lllm.conf.example")
- "aichat/config.yaml": aichat-specific configuration generated by "aichat_config_gen.py"
The modular design allows easy extension:
- Add new provider configurations to "lllm.conf" following the existing format
- Update the model-mapping logic in "litellm_config_gen.py" if needed
- Test with "check_llm.py" for inventory verification
- Generate the LiteLLM config with GitHub Copilot models:
python litellm_config_gen.py --verbose
- Check available Azure models:
python check_llm.py --type azure
- Set up the aichat config:
python aichat_config_gen.py --api-base http://localhost:4141 --model gpt-4