
Can the LLM engine be replaced by litellm? #219

@linhaowei1

Description


Different LLMs expose slightly different APIs. For example, OpenAI's o-series requires the "max_completion_tokens" parameter and does not support "temperature", but these cases are currently hard-coded. When I tested GPT-5, it followed the same format as the o-series, so we had to add a specific rule to accommodate it. If users rely on self-hosted models or on APIs from specialized providers, managing the LLM engines could become quite messy. I suggest that OpenEvolve integrate a unified LLM proxy such as LiteLLM, which can automatically translate requests into the format each provider expects.
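
A rough sketch of what a LiteLLM-backed engine could look like (this is illustrative, not OpenEvolve code; the `generate` wrapper and the model names are assumptions). LiteLLM exposes an OpenAI-compatible `completion()` call for all providers, and its `drop_params` setting drops parameters a given model does not accept instead of raising an error:

```python
# Minimal sketch of a provider-agnostic engine built on LiteLLM.
# Assumptions: the wrapper function name and the example model names
# below are hypothetical; drop_params is a real LiteLLM setting that
# silently discards parameters a provider rejects (e.g. temperature
# for o-series models).
import litellm

litellm.drop_params = True  # drop unsupported params instead of erroring


def generate(model: str, prompt: str, **params) -> str:
    """Send a chat completion to any provider via LiteLLM's unified API."""
    response = litellm.completion(
        model=model,  # e.g. "gpt-4o", "o3-mini", "anthropic/claude-3-5-sonnet"
        messages=[{"role": "user", "content": prompt}],
        **params,
    )
    return response.choices[0].message.content


# The same call site then works across providers without per-model
# special cases in OpenEvolve itself:
# generate("gpt-4o", "hello", temperature=0.7, max_tokens=64)
# generate("o3-mini", "hello", max_tokens=64)  # temperature dropped/mapped by LiteLLM
```

With something like this, support for a new provider (or a self-hosted endpoint) would mostly reduce to passing a different model string, rather than adding another hard-coded rule.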
