model upgrades: 4o -> 4.1 ; 4.1 -> 5.1 ; 4o-mini -> 4.1-mini #3456
Conversation
Code Review
This pull request updates the language model versions used in the application. The changes are consistent with the description. I've added one suggestion to refactor the model initializations by using constants for model names. This will improve maintainability and prevent potential inconsistencies, especially since some models are now shared between different clients.
 llm_mini = ChatOpenAI(model='gpt-4.1-mini')
 llm_mini_stream = ChatOpenAI(model='gpt-4.1-mini', streaming=True)
 llm_large = ChatOpenAI(model='o1-preview')
 llm_large_stream = ChatOpenAI(model='o1-preview', streaming=True, temperature=1)
 llm_high = ChatOpenAI(model='o4-mini')
 llm_high_stream = ChatOpenAI(model='o4-mini', streaming=True, temperature=1)
-llm_medium = ChatOpenAI(model='gpt-4o')
-llm_medium_experiment = ChatOpenAI(model='gpt-4.1')
-llm_medium_stream = ChatOpenAI(model='gpt-4o', streaming=True)
+llm_medium = ChatOpenAI(model='gpt-4.1')
+llm_medium_stream = ChatOpenAI(model='gpt-4.1', streaming=True)
+llm_medium_experiment = ChatOpenAI(model='gpt-5.1')

 # Specialized models for agentic workflows
-llm_agent = ChatOpenAI(model='gpt-4.1')
-llm_agent_stream = ChatOpenAI(model='gpt-4.1', streaming=True)
+llm_agent = ChatOpenAI(model='gpt-5.1')
+llm_agent_stream = ChatOpenAI(model='gpt-5.1', streaming=True)
To improve maintainability and reduce the risk of inconsistencies, it's good practice to define model names as constants and reuse them. Since this pull request updates model versions, duplicating each model name across the streaming and non-streaming clients (e.g., llm_mini and llm_mini_stream) increases the chance of a missed update. I also noticed that llm_medium_experiment and llm_agent now use the same model, which a shared constant would capture as well. With constants, each model name only needs to change in one place.
# Model name constants
MODEL_MINI = 'gpt-4.1-mini'
MODEL_LARGE = 'o1-preview'
MODEL_HIGH = 'o4-mini'
MODEL_MEDIUM = 'gpt-4.1'
MODEL_AGENTIC = 'gpt-5.1' # Used for experiment and agent
# Base models for general use
llm_mini = ChatOpenAI(model=MODEL_MINI)
llm_mini_stream = ChatOpenAI(model=MODEL_MINI, streaming=True)
llm_large = ChatOpenAI(model=MODEL_LARGE)
llm_large_stream = ChatOpenAI(model=MODEL_LARGE, streaming=True, temperature=1)
llm_high = ChatOpenAI(model=MODEL_HIGH)
llm_high_stream = ChatOpenAI(model=MODEL_HIGH, streaming=True, temperature=1)
llm_medium = ChatOpenAI(model=MODEL_MEDIUM)
llm_medium_stream = ChatOpenAI(model=MODEL_MEDIUM, streaming=True)
llm_medium_experiment = ChatOpenAI(model=MODEL_AGENTIC)
# Specialized models for agentic workflows
llm_agent = ChatOpenAI(model=MODEL_AGENTIC)
llm_agent_stream = ChatOpenAI(model=MODEL_AGENTIC, streaming=True)
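Going one step further with the same idea: the paired streaming/non-streaming construction could itself be factored into a small helper, so each model name appears exactly once per pair. A sketch, not a definitive implementation — `make_llm_pair` is a hypothetical helper, and the stub `ChatOpenAI` class below only stands in for the real langchain client so the snippet runs standalone:

```python
# Stub standing in for the real ChatOpenAI client, so this sketch is
# self-contained and runnable without the langchain dependency.
class ChatOpenAI:
    def __init__(self, model, streaming=False, **kwargs):
        self.model = model
        self.streaming = streaming
        self.kwargs = kwargs


def make_llm_pair(model, **kwargs):
    """Build the (non-streaming, streaming) client pair for one model name."""
    return (ChatOpenAI(model=model, **kwargs),
            ChatOpenAI(model=model, streaming=True, **kwargs))


MODEL_MINI = 'gpt-4.1-mini'
MODEL_AGENTIC = 'gpt-5.1'

llm_mini, llm_mini_stream = make_llm_pair(MODEL_MINI)
llm_agent, llm_agent_stream = make_llm_pair(MODEL_AGENTIC)
```

This keeps the model name and the streaming flag from drifting apart the next time versions are bumped.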
model upgrades: 4o -> 4.1 ; 4.1 -> 5.1 ; 4o-mini -> 4.1-mini