Revert "uses gpt-5, gpt-5 mini" #3444
```diff
@@ -42,8 +42,8 @@
 from utils.app_integrations import get_github_docs_content
 from utils.retrieval.agentic import execute_agentic_chat_stream

-model = ChatOpenAI(model="gpt-5-mini")
-llm_medium_stream = ChatOpenAI(model='gpt-5', streaming=True)
+model = ChatOpenAI(model="gpt-4o-mini")
+llm_medium_stream = ChatOpenAI(model='gpt-4o', streaming=True)

 class StructuredFilters(TypedDict):
```

**Contributor** commented on lines +45 to +46:

> These model definitions can be improved for better maintainability. Please remove these two lines and add the following import at the top of the file with the other imports:
>
> `from utils.llm.clients import llm_medium_stream`
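The reviewer's point is that scattering model names across files makes a model swap (like this gpt-5 to gpt-4o revert) error-prone. A minimal sketch of the centralized-client idea follows; the constant names and the `make_client` helper are illustrative assumptions (the real code presumably uses `ChatOpenAI` from `langchain_openai`), shown here as a plain factory so the sketch runs without that dependency:

```python
# Hypothetical sketch: define the agent's models in one shared module
# (the review suggests utils/llm/clients.py) so that a revert like this
# PR touches a single file. Names below are assumptions, not the
# repository's actual API.

AGENT_MODEL = "gpt-4o"          # streaming chat model from the diff
AGENT_MODEL_MINI = "gpt-4o-mini"  # lightweight model from the diff

def make_client(model: str, streaming: bool = False) -> dict:
    # Stand-in for ChatOpenAI(model=..., streaming=...); returns a plain
    # config dict so this sketch is runnable without langchain installed.
    return {"model": model, "streaming": streaming}

# Importers would then write:
#   from utils.llm.clients import llm_medium_stream
llm_medium_stream = make_client(AGENT_MODEL, streaming=True)
model = make_client(AGENT_MODEL_MINI)
```

With this layout, the two lines flagged in the review disappear from the agent module entirely and every caller picks up a model change automatically.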
```diff
@@ -23,8 +23,7 @@ class AgentSafetyGuard:
     - Suspicious parameter patterns
     """

-    # gpt-5 // 400k
-    def __init__(self, max_tool_calls: int = 10, max_context_tokens: int = 400000):
+    def __init__(self, max_tool_calls: int = 10, max_context_tokens: int = 500000):
         self.max_tool_calls = max_tool_calls
         self.max_context_tokens = max_context_tokens
```

**Contributor:**

> The `max_context_tokens` value of `500000` is hardcoded here. This value is also used as a default in the `AgentSafetyGuard` constructor in `backend/utils/retrieval/safety.py`. To avoid duplication and potential inconsistencies, it would be better to define this as a constant in a shared location (e.g., `backend/utils/llm/clients.py`) and import it where needed. This ensures that if the agent's model changes, its context window size is updated consistently everywhere.
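The suggestion above can be sketched as a shared constant consumed as the constructor default. This is a hypothetical illustration, not the repository's code: the constant name is an assumption, and only the `500000` value and the constructor signature come from the diff.

```python
# Hypothetical sketch of the reviewer's suggestion: declare the agent's
# context window once (the review proposes utils/llm/clients.py) and use
# it as the default wherever a limit is needed.

AGENT_MAX_CONTEXT_TOKENS = 500_000  # value taken from the PR diff

class AgentSafetyGuard:
    """Guards an agentic chat loop against runaway tool use and
    context overflow (simplified from the diff's class)."""

    def __init__(self, max_tool_calls: int = 10,
                 max_context_tokens: int = AGENT_MAX_CONTEXT_TOKENS):
        # In the suggested layout this default would be imported, e.g.:
        #   from utils.llm.clients import AGENT_MAX_CONTEXT_TOKENS
        self.max_tool_calls = max_tool_calls
        self.max_context_tokens = max_context_tokens
```

If the agent's model changes again, only the constant's definition site needs editing; the guard and any other consumers stay in sync automatically.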