refactor: Make LLM and Embedding Model Configurable via YAML #108

@AaryanCode69

Description

Currently, the LLM and embedding model are hardcoded in AgentGraph.__init__():

```python
get_llm("openai", "gpt-4o-mini")
get_embedding("openai", "text-embedding-3-large")
```

This limits flexibility for:

  • Experimenting with alternative providers (e.g., Ollama, HuggingFace)
  • Running different embedding models in different environments
  • Future extensibility (e.g., MCP-backed retrieval setups)

Would it make sense to expose these via the existing YAML config system (e.g., `llm.provider`, `llm.model`, `embedding.provider`, `embedding.model`) while keeping the current defaults?
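To illustrate backward compatibility, here is a minimal sketch of how the resolution could work. The `resolve` helper and the `config` dict are hypothetical names for illustration; `config` stands in for whatever the existing YAML loader returns, and the defaults are the values currently hardcoded in `AgentGraph.__init__()`:

```python
# Hypothetical sketch: resolve LLM/embedding settings from the parsed YAML
# config, falling back to the currently hardcoded values when a key is absent.

DEFAULTS = {
    "llm": {"provider": "openai", "model": "gpt-4o-mini"},
    "embedding": {"provider": "openai", "model": "text-embedding-3-large"},
}

def resolve(config: dict, section: str) -> tuple[str, str]:
    """Merge a config section over the defaults (backward compatible)."""
    merged = {**DEFAULTS[section], **config.get(section, {})}
    return merged["provider"], merged["model"]

# With an empty or missing section, behavior is unchanged:
assert resolve({}, "llm") == ("openai", "gpt-4o-mini")
# With an override, e.g. an Ollama-backed setup:
assert resolve({"llm": {"provider": "ollama", "model": "llama3"}}, "llm") == ("ollama", "llama3")
```

`AgentGraph.__init__()` would then call `get_llm(*resolve(config, "llm"))` instead of passing literals, so existing deployments with no YAML changes see identical behavior.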

If this aligns with the project direction, I’d be happy to open a small PR implementing it with full backward compatibility.
