Currently, the LLM and embedding model are hardcoded in `AgentGraph.__init__()`:

```python
get_llm("openai", "gpt-4o-mini")
get_embedding("openai", "text-embedding-3-large")
```
This limits flexibility for:
- Experimenting with alternative providers (e.g., Ollama, HuggingFace)
- Running different embedding models in different environments
- Future extensibility (e.g., MCP-backed retrieval setups)
Would it make sense to expose these via the existing YAML config system (e.g., `llm.provider`, `llm.model`, `embedding.provider`, `embedding.model`) while keeping the current defaults?
If this aligns with the project direction, I’d be happy to open a small PR implementing it with full backward compatibility.
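For concreteness, a minimal sketch of how the resolution could work, merging user-supplied config over the current hardcoded values so existing setups are unaffected. The `DEFAULTS` dict and `resolve` helper are illustrative assumptions, not existing project code:

```python
# Hypothetical sketch: current hardcoded values become defaults,
# and the YAML config (parsed into a dict) can override per section.
DEFAULTS = {
    "llm": {"provider": "openai", "model": "gpt-4o-mini"},
    "embedding": {"provider": "openai", "model": "text-embedding-3-large"},
}

def resolve(config: dict, section: str) -> tuple[str, str]:
    """Merge the user's config section over the defaults."""
    merged = {**DEFAULTS[section], **config.get(section, {})}
    return merged["provider"], merged["model"]

# A config overriding only the LLM keeps the embedding defaults intact.
cfg = {"llm": {"provider": "ollama", "model": "llama3"}}
print(resolve(cfg, "llm"))        # ('ollama', 'llama3')
print(resolve(cfg, "embedding"))  # ('openai', 'text-embedding-3-large')
```

With this shape, an empty or missing config section reproduces today's behavior exactly, which is what makes the change backward compatible.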