Add different models for litellm, resolve conflicts #148

Merged
lfnothias merged 2 commits into dev from dev_madina on Feb 21, 2025
Conversation

@madina1203 (Collaborator) commented Feb 20, 2025

PR Type

enhancement


Description

  • Introduced new models for litellm integration.

  • Enhanced workflow creation with model management.

  • Improved API key handling for various providers (see the sketch after this list).

  • Updated configuration for litellm models.
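
A minimal sketch of what a provider-keyed lookup such as the new get_api_key helper could do; the environment-variable names for deepseek and ovh are assumptions for illustration and may differ from the actual implementation in app/core/main.py:

    import os
    from typing import Optional

    def get_api_key(provider: str) -> Optional[str]:
        """Return the API key for a provider from the environment.

        The variable names below are assumed for illustration; the PR's helper
        in app/core/main.py may resolve keys differently.
        """
        env_vars = {
            "openai": "OPENAI_API_KEY",      # also read directly in llm_creation
            "deepseek": "DEEPSEEK_API_KEY",  # assumed name
            "ovh": "OVH_API_KEY",            # assumed name
        }
        return os.getenv(env_vars.get(provider, "OPENAI_API_KEY"))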


Changes walkthrough 📝

Relevant files

Enhancement

evaluation.py (app/core/evaluation.py): Update evaluation workflow with new model handling (+12/-6)
  • Added link_kg_database and llm_creation imports.
  • Updated workflow creation to use models.
  • Changed variable names for clarity.

main.py (app/core/main.py): Enhance main module with API key management (+142/-72)
  • Added get_api_key and create_litellm_model functions.
  • Improved API key retrieval for multiple providers.
  • Updated llm_creation to support litellm models.

langraph_workflow.py (app/core/workflow/langraph_workflow.py): Refactor workflow creation for model integration (+5/-7)
  • Modified create_workflow to accept models.
  • Removed redundant API key parameter.
  • Streamlined workflow creation process.

params.ini (app/config/params.ini): Update configuration for litellm models (+12/-3)
  • Added configurations for multiple litellm models (see the example after this walkthrough).
  • Updated model IDs and parameters.

Dependencies

environment.yml (environment.yml): Update dependencies for compatibility (+1/-1)
  • Updated tiktoken dependency version.
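
For reference, a hypothetical params.ini section of the kind the updated llm_creation loop picks up (any section whose name starts with llm_litellm); the section name, model id, and values below are illustrative placeholders, not copied from the PR:

    [llm_litellm_example]
    ; illustrative values only; the real params.ini in this PR defines its own ids and parameters
    id = anthropic/claude-3-5-sonnet-20240620
    temperature = 0.0
    max_retries = 3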

Need help?
  • Type /help how to ... in the comments thread for any questions about PR-Agent usage.
  • Check out the documentation for more information.
@github-actions bot (Contributor)

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Possible Issue

    The function create_litellm_model raises a ValueError if the model id is not present in the configuration. This could potentially cause the application to crash if the configuration is not properly set. Consider adding error handling or default values to prevent this.

    def create_litellm_model(config: configparser.SectionProxy) -> ChatLiteLLM:
        """
        Create a ChatLiteLLM instance based on the model id and configuration.
        Only uses parameters that are explicitly specified in the configuration.
    
        Args:
            config (configparser.SectionProxy): The configuration section
    
        Returns:
            ChatLiteLLM: Configured ChatLiteLLM instance
        """
        if "id" not in config:
            raise ValueError("Model id is required in configuration")
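
    One way to make this less brittle, sketched below under assumptions not taken from the PR (the error message, the optional-key handling, and the import path are illustrative), is to name the offending section in the error and forward optional parameters only when they are present:

    import configparser

    from langchain_community.chat_models import ChatLiteLLM  # assumed import path for ChatLiteLLM

    def create_litellm_model(config: configparser.SectionProxy) -> ChatLiteLLM:
        """Build a ChatLiteLLM instance, failing with a descriptive error when 'id' is missing."""
        if "id" not in config:
            # Naming the section makes a misconfigured params.ini easy to diagnose.
            raise ValueError(f"Section [{config.name}] must define an 'id' for the litellm model")

        params = {"model": config["id"]}
        # Forward optional keys only when explicitly present; no silent defaults are injected.
        if "temperature" in config:
            params["temperature"] = config.getfloat("temperature")
        if "max_retries" in config:
            params["max_retries"] = config.getint("max_retries")
        return ChatLiteLLM(**params)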
    Configuration Handling

    The llm_creation function reads from a configuration file but does not handle cases where expected configuration sections or keys are missing. This could lead to runtime errors. Consider adding validation for the configuration file (a sketch follows the excerpt below).

    def llm_creation(api_key=None, params_file=None):
        """
        Reads the parameters from the configuration file (default is params.ini) and initializes the language models.
    
        Args:
            api_key (str, optional): The API key for the OpenAI API.
            params_file (str, optional): Path to an alternate configuration file.
    
        Returns:
            dict: A dictionary containing the language models.
        """
    
        config = configparser.ConfigParser()
        if params_file:
            config.read(params_file)
        else:
            config.read(params_path)
    
        models = {}
    
        # Get the OpenAI API key from the configuration file or the environment variables if none is passed.
        openai_api_key = api_key if api_key else os.getenv("OPENAI_API_KEY")
    
        for section in config.sections():
            if section.startswith("llm_litellm"):
                models[section] = create_litellm_model(config[section])
                continue
    
            temperature = config[section]["temperature"]
            model_id = config[section]["id"]
            max_retries = config[section]["max_retries"]
    
    
            provider = "openai"
            if section.startswith("deepseek"):
                provider = "deepseek"
            elif section.startswith("ovh"):
                provider = "ovh"
    
    
            api_key = get_api_key(provider)
    
    
            model_params = {
                "temperature": float(temperature),
                "model": model_id,
                "max_retries": int(max_retries),
                "verbose": True
            }
    
    
            if "base_url" in config[section]:
                base_url = config[section]["base_url"]
                if provider == "deepseek":
                    model_params["openai_api_base"] = base_url
                    model_params["openai_api_key"] = api_key
                else:
                    model_params["base_url"] = base_url
                    model_params["api_key"] = api_key
            else:
                model_params["openai_api_key"] = api_key
    
            llm = ChatOpenAI(**model_params)
            models[section] = llm
    
        return models
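
    A lightweight way to add the missing validation, sketched with a hypothetical helper name and an assumed set of required keys (neither taken from the PR), is to check each non-litellm section up front and fail with a message naming the file and section:

    import configparser

    REQUIRED_KEYS = ("id", "temperature", "max_retries")  # assumed minimum for non-litellm sections

    def validate_llm_config(config: configparser.ConfigParser, path: str) -> None:
        """Raise a descriptive error if a model section lacks keys that llm_creation relies on."""
        if not config.sections():
            raise ValueError(f"No model sections found in configuration file: {path}")
        for section in config.sections():
            if section.startswith("llm_litellm"):
                # litellm sections are checked by create_litellm_model instead.
                continue
            missing = [key for key in REQUIRED_KEYS if key not in config[section]]
            if missing:
                raise ValueError(f"Section [{section}] in {path} is missing required keys: {', '.join(missing)}")

    llm_creation could call such a helper right after config.read(...) so that a misconfigured params.ini fails loudly before any model is constructed.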

    @github-actions bot (Contributor) commented Feb 20, 2025

    PR Code Suggestions ✨

    Explore these optional code suggestions:

    General
    Validate configuration values' ranges

    Validate that the temperature and max_retries values in the configuration are within
    acceptable ranges to prevent unexpected behavior.

    app/core/main.py [179-180]

    -temperature = config[section]["temperature"]
    -max_retries = config[section]["max_retries"]
    +temperature = float(config[section]["temperature"])
    +if not (0.0 <= temperature <= 1.0):
    +    raise ValueError("Temperature must be between 0.0 and 1.0")
    +max_retries = int(config[section]["max_retries"])
    +if max_retries < 0:
    +    raise ValueError("Max retries must be non-negative")
    Suggestion importance[1-10]: 9

    Why: Validating the ranges of temperature and max_retries ensures that the application behaves as expected and prevents potential errors due to invalid configuration values. This enhances the reliability and stability of the application.

    Impact: High
    Add error handling for config read

    Add error handling for the config.read() method to handle cases where the
    configuration file might not be found or is unreadable.

    app/core/main.py [164]

    -config.read(params_file)
    +if not config.read(params_file):
    +    raise FileNotFoundError(f"Configuration file {params_file} not found or unreadable.")
    Suggestion importance[1-10]: 8

    Why: Adding error handling for the config.read() method is important to handle cases where the configuration file might be missing or unreadable, which can prevent the application from crashing unexpectedly.

    Impact: Medium
    Possible issue
    Ensure params_path is defined

    Ensure that the params_path variable is defined before it is used in the
    llm_creation function to avoid potential runtime errors.

    app/core/main.py [166]

    -config.read(params_path)
    +config.read(params_path)  # Ensure params_path is defined
    Suggestion importance[1-10]: 7

    Why: Ensuring that params_path is defined before use is crucial to prevent runtime errors, especially since the code relies on this variable to read configuration files. This suggestion helps improve the robustness of the function.

    Impact: Medium
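
    For context, one common way to define params_path (not necessarily how this repository does it) is to resolve it relative to the module, so the default configuration is found regardless of the working directory; the app/config layout below matches the params.ini file touched by this PR but is an assumption:

    from pathlib import Path

    # Assumed layout: this module lives in app/core/ and the default config in app/config/.
    params_path = Path(__file__).resolve().parent.parent / "config" / "params.ini"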

    @lfnothias merged commit b9444e4 into dev on Feb 21, 2025