Chat2AnyLLM/llm-config
llm-config

LLM configuration tools for managing LiteLLM, Azure AI, GitHub Copilot, and aichat setups.

Architecture Overview

This repository provides a comprehensive toolkit for managing Large Language Model (LLM) configurations across multiple platforms:

  • litellm_config_gen.py: Synchronizes GitHub Copilot models with LiteLLM configuration and merges external model definitions from lllm.conf
  • check_llm.py: Azure AI Services deployment inventory tool for listing and comparing Azure models with LiteLLM configurations
  • aichat_config_gen.py: Generates aichat configuration files from LiteLLM API responses
  • models.py: GitHub Copilot authentication and model fetching utilities

The tools work together to provide end-to-end LLM configuration management: Copilot models are fetched and synchronized into LiteLLM config, Azure deployments are inventoried and compared, and aichat configurations are generated for seamless integration.

Upstream Models

This toolkit can pull models from multiple upstream sources:

GitHub Copilot Models

  • Dynamically fetches available models from GitHub Copilot based on your account access
  • Supports both individual and organization-level Copilot access

Azure AI Services Models

  • Discovers deployed models in your Azure AI Services subscriptions
  • Supports both Azure OpenAI and Azure AI Services (formerly Cognitive Services) models

External Provider Models

The toolkit supports external providers through the lllm.conf configuration file:

  • Alibaba DashScope:
    • glm-4.5
    • Moonshot-Kimi-K2-Instruct
    • deepseek-v3.1
    • deepseek-v3.2-exp
    • llama-4-maverick-17b-128e-instruct
    • qwen3-coder-plus
    • qwen3-max

Fallback Models

When network access is unavailable, the toolkit falls back to a static list of common OpenAI models:

  • GPT-4 series (gpt-4, gpt-4-0613, gpt-4-0125-preview)
  • GPT-3.5 series (gpt-3.5-turbo, gpt-3.5-turbo-0613)
  • GPT-4o series (gpt-4o-mini, gpt-4o-2024-05-13, gpt-4o-2024-08-06)
  • o-series and GPT-4.1 models (o3-mini, o3-2025-04-16, o4-mini-2025-04-16, gpt-4.1-2025-04-14)
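
The fallback behaviour described above can be sketched as follows. The function names and structure are illustrative assumptions, not litellm_config_gen.py's actual API:

```python
# Static fallback list, used whenever the live fetch fails or returns nothing.
FALLBACK_MODELS = [
    "gpt-4", "gpt-4-0613", "gpt-4-0125-preview",
    "gpt-3.5-turbo", "gpt-3.5-turbo-0613",
    "gpt-4o-mini", "gpt-4o-2024-05-13", "gpt-4o-2024-08-06",
    "o3-mini", "o3-2025-04-16", "o4-mini-2025-04-16", "gpt-4.1-2025-04-14",
]

def resolve_models(fetch):
    """Return models from fetch(); fall back to the static list on failure."""
    try:
        models = fetch()
    except Exception:
        return list(FALLBACK_MODELS)  # network/CLI error: use statics
    return models or list(FALLBACK_MODELS)  # empty result also falls back

def offline_fetch():
    raise ConnectionError("network unavailable")

print(resolve_models(lambda: ["gpt-4o-mini"]))  # live list wins
print(resolve_models(offline_fetch))            # static fallback
```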

Supported Client Tools

The toolkit generates configurations for various LLM client tools:

aichat

Generates configuration files for aichat, supporting:

  • Azure OpenAI models
  • Google Gemini models
  • OpenAI models
  • Moonshot models
  • Other providers through OpenAI-compatible API interfaces

Setup

  1. Create a ".env" file based on the ".env.template" file:

    cp .env.template .env

  2. Update the ".env" file with your actual values (see ".env.template" for required variables):

    # LiteLLM API Configuration
    LITELLM_API_KEY=your_actual_api_key_here
    LITELLM_API_URL=http://localhost:4141/v1/model/info
    LITELLM_API_BASE=http://localhost:4141

    # Output file path
    AICHAT_CONFIG_PATH=/path/to/your/aichat/config.yaml

    # Azure Configuration
    AZURE_SUBSCRIPTION_ID=your_azure_subscription_id

  3. Install the required Python packages:

    pip install -r requirements.txt

    Or manually:

    pip install python-dotenv requests azure-identity azure-mgmt-cognitiveservices ruamel.yaml
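
The scripts read these values via python-dotenv. As a stdlib-only illustration of the KEY=VALUE format that ".env" uses (a sketch of the file format, not the python-dotenv implementation):

```python
# Minimal .env-style parser: KEY=VALUE lines; blanks and '#' comments skipped.
def parse_env(text):
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# LiteLLM API Configuration
LITELLM_API_BASE=http://localhost:4141
AZURE_SUBSCRIPTION_ID=your_azure_subscription_id
"""
print(parse_env(sample)["LITELLM_API_BASE"])  # http://localhost:4141
```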

Usage

litellm_config_gen.py

Synchronizes GitHub Copilot models in "config.yaml" with models available to the authenticated GitHub CLI user, and merges additional models from "lllm.conf".

python litellm_config_gen.py [options]

Options:

  • "--static-only": Skip network/CLI; use static list only
  • "--no-cli": Do not use gh CLI fallback
  • "--no-http": Disable HTTP /models attempt
  • "--quiet": Suppress debug output
  • "--verbose": Enable debug output
  • "--json": Emit JSON summary to stdout
  • "--cache-file PATH": Path to cache file for model list
  • "--cache-ttl SECONDS": Seconds cache file remains valid
  • "--dry-run": Compute changes but do not write config.yaml
  • "--output PATH": Path to config file to read/write (default: config.yaml)

check_llm.py

An Azure AI Services deployment inventory tool: lists Azure models, lists LiteLLM models, or compares the two.

python check_llm.py [options]

Options:

  • "-d, --debug": Enable debug logging
  • "-c, --compare": Compare with LiteLLM models
  • "--type {azure,litellm}": Specify the type of models to list
  • "--all-subscriptions": Retrieve resources from all subscriptions

Examples:

  • List Azure models: "python check_llm.py --type azure"
  • List LiteLLM models: "python check_llm.py --type litellm"
  • Compare models: "python check_llm.py --compare"

lllm.conf.example

Example configuration file for LLM providers and models. Copy this to "lllm.conf" and customize for your needs.

cp lllm.conf.example lllm.conf

Then edit "lllm.conf" with your provider configurations and models.

The file contains provider definitions and model mappings for services like Alibaba DashScope.
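
For orientation, the entries merged into "config.yaml" follow LiteLLM's standard model_list schema. An illustrative fragment (the model name and endpoint are examples only; lllm.conf's own syntax is defined by "lllm.conf.example"):

```yaml
model_list:
  - model_name: qwen3-coder-plus
    litellm_params:
      model: openai/qwen3-coder-plus
      api_base: https://dashscope.aliyuncs.com/compatible-mode/v1
      api_key: os.environ/DASHSCOPE_API_KEY
```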

aichat_config_gen.py

Generates an aichat configuration file from the LiteLLM API.

python aichat_config_gen.py [options]

Options:

  • "--api-base URL": LiteLLM API base URL
  • "--api-key KEY": LiteLLM API key
  • "--model MODEL": Default model to use in config
  • "--insecure": Disable SSL certificate verification

If options are not provided, the script will prompt for missing values or use environment variables from ".env".
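
That lookup order (CLI option, then environment, then interactive prompt) can be sketched as follows; resolve_option is an illustrative name, not the script's actual function:

```python
import os

def resolve_option(cli_value, env_var, prompt_fn=input):
    """Prefer an explicit CLI option, then the environment, then a prompt."""
    if cli_value:
        return cli_value
    env_value = os.environ.get(env_var)
    if env_value:
        return env_value
    return prompt_fn(f"{env_var}: ")

# e.g. api_base = resolve_option(args.api_base, "LITELLM_API_BASE")
# returns the .env value when LITELLM_API_BASE is set, and prompts otherwise.
```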

Troubleshooting

Common Azure/LiteLLM Authentication Issues

Azure Authentication Problems:

  • Ensure AZURE_SUBSCRIPTION_ID is set correctly in your .env file
  • Verify Azure CLI authentication: az login
  • Check that your account has appropriate permissions for Azure AI Services

LiteLLM API Connection Issues:

  • Verify LITELLM_API_URL and LITELLM_API_BASE point to a running LiteLLM instance
  • Check LITELLM_API_KEY is valid and has necessary permissions
  • Use --verbose flag for detailed error information

GitHub Copilot Authentication:

  • Ensure GitHub CLI is installed and authenticated: gh auth login
  • Verify Copilot access is enabled for your account/organization

Configuration File Relationships

  • config.yaml: Main LiteLLM configuration file generated by litellm_config_gen.py
  • lllm.conf: External model definitions merged into config.yaml (copy from lllm.conf.example)
  • aichat/config.yaml: aichat-specific configuration generated by aichat_config_gen.py

Extending Support for Additional LLM Providers

The modular design allows easy extension:

  1. Add new provider configurations to lllm.conf following the existing format
  2. Update model mapping logic in litellm_config_gen.py if needed
  3. Test with check_llm.py for inventory verification

Examples

  1. Generate LiteLLM config with GitHub Copilot models:

    python litellm_config_gen.py --verbose

  2. Check available Azure models:

    python check_llm.py --type azure

  3. Set up aichat config:

    python aichat_config_gen.py --api-base http://localhost:4141 --model gpt-4
