
Revert "uses gpt-5, gpt-5 mini"#3444

Merged
beastoin merged 1 commit into main from revert-3443-mz7ba_gpt_5 on Nov 17, 2025

Conversation

@beastoin
Collaborator

Reverts #3443

@beastoin beastoin merged commit 390d25d into main Nov 17, 2025
1 check passed
@beastoin beastoin deleted the revert-3443-mz7ba_gpt_5 branch November 17, 2025 06:41
Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request reverts the use of gpt-5 and gpt-5-mini models, which is reflected correctly across all the changed files. The model names and associated configurations like max_context_tokens have been updated consistently.

My review focuses on improving maintainability. I've identified instances of code duplication and hardcoded values that could lead to inconsistencies in the future. Specifically:

  • In backend/utils/retrieval/graph.py, there's an opportunity to remove an unused variable and import a model client from the central clients.py file instead of redefining it.
  • In backend/utils/retrieval/agentic.py and backend/utils/retrieval/safety.py, the max_context_tokens value is hardcoded in multiple places. Centralizing this as a constant would be beneficial.

Addressing these points will make the codebase more robust and easier to manage.

  # Initialize safety guard
- # gpt-5 // 400k
- safety_guard = AgentSafetyGuard(max_tool_calls=10, max_context_tokens=400000)
+ safety_guard = AgentSafetyGuard(max_tool_calls=10, max_context_tokens=500000)
Contributor


high

The max_context_tokens value of 500000 is hardcoded here. This value is also used as a default in the AgentSafetyGuard constructor in backend/utils/retrieval/safety.py. To avoid duplication and potential inconsistencies, it would be better to define this as a constant in a shared location (e.g., backend/utils/llm/clients.py) and import it where needed. This ensures that if the agent's model changes, its context window size is updated consistently everywhere.
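The suggestion above can be sketched as follows. This is a minimal, self-contained illustration rather than the actual omi code: the module paths come from the review, but the constant name AGENT_MAX_CONTEXT_TOKENS and the stub AgentSafetyGuard body are assumptions.

```python
# Sketch of the suggested refactor (names below are assumptions, not the real codebase).

# --- backend/utils/llm/clients.py ---
# Single source of truth for the agent model's context window,
# kept next to the model definition so they change together.
AGENT_MAX_CONTEXT_TOKENS = 500_000


# --- backend/utils/retrieval/safety.py ---
# from utils.llm.clients import AGENT_MAX_CONTEXT_TOKENS
class AgentSafetyGuard:
    def __init__(self, max_tool_calls: int = 10,
                 max_context_tokens: int = AGENT_MAX_CONTEXT_TOKENS):
        self.max_tool_calls = max_tool_calls
        self.max_context_tokens = max_context_tokens


# --- backend/utils/retrieval/agentic.py ---
# from utils.llm.clients import AGENT_MAX_CONTEXT_TOKENS
safety_guard = AgentSafetyGuard(max_tool_calls=10,
                                max_context_tokens=AGENT_MAX_CONTEXT_TOKENS)
```

With this layout, swapping the agent model means updating one constant; the constructor default and the call site in agentic.py can no longer drift apart.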

Comment on lines +45 to +46
+ model = ChatOpenAI(model="gpt-4o-mini")
+ llm_medium_stream = ChatOpenAI(model='gpt-4o', streaming=True)
Contributor


high

These model definitions can be improved for better maintainability:

  1. Unused variable: The model variable defined on line 45 is not used anywhere in this file and can be safely removed.
  2. Duplicate definition: llm_medium_stream on line 46 is already defined in backend/utils/llm/clients.py. Instead of redefining it, you should import it from the centralized clients.py file. This avoids code duplication and ensures all model configurations are in a single place.

Please remove these two lines and add the following import at the top of the file with other utils imports:

from utils.llm.clients import llm_medium_stream
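Applied to graph.py, the change might look like the sketch below. The ChatOpenAI class here is a local stand-in so the snippet runs without langchain_openai installed; the module paths are those named in the review.

```python
class ChatOpenAI:
    # Stand-in for langchain_openai.ChatOpenAI so this sketch is runnable
    # without the real dependency.
    def __init__(self, model: str, streaming: bool = False):
        self.model = model
        self.streaming = streaming


# Before (in backend/utils/retrieval/graph.py):
#   model = ChatOpenAI(model="gpt-4o-mini")                         # unused, remove
#   llm_medium_stream = ChatOpenAI(model='gpt-4o', streaming=True)  # duplicate, remove

# After: the single definition lives in backend/utils/llm/clients.py ...
llm_medium_stream = ChatOpenAI(model='gpt-4o', streaming=True)

# ... and graph.py imports it instead of redefining it:
# from utils.llm.clients import llm_medium_stream
```

Keeping one definition means a future model swap (or a change to streaming behavior) happens in clients.py alone.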


- # gpt-5 // 400k
- def __init__(self, max_tool_calls: int = 10, max_context_tokens: int = 400000):
+ def __init__(self, max_tool_calls: int = 10, max_context_tokens: int = 500000):
Contributor


high

The max_context_tokens is hardcoded here with a default value of 500000. This value is also hardcoded during instantiation in backend/utils/retrieval/agentic.py. This duplication can lead to inconsistencies if the context window for the agent model changes in the future. To improve maintainability, consider defining this value as a constant in a central place, like backend/utils/llm/clients.py where the agent model is defined, and then import and use that constant in both places.

Glucksberg pushed a commit to Glucksberg/omi-local that referenced this pull request Apr 28, 2026