fix: jetbrains cleanup and logging, reduce ollama api/show calls #8840
base: main
Conversation
✅ Review Complete: Code Review Summary
3 issues found across 4 files
Prompt for AI agents (all 3 issues)
Understand the root cause of the following 3 issues and fix them.
```xml
<file name="core/llm/llms/Ollama.ts">
  <violation number="1" location="core/llm/llms/Ollama.ts:175">
    Lazy-loading the model info leaves `supportsFim()` false for the first request, so the initial autocomplete call always skips the FIM path and ignores the suffix context for FIM-capable models.
  </violation>
</file>
<file name="binary/src/TcpMessenger.ts">
  <violation number="1" location="binary/src/TcpMessenger.ts:127">
    Dropping the caught error object from the console.error call removes the stack trace and error message, making JSON parsing failures much harder to debug.
  </violation>
</file>
<file name="binary/src/IpcMessenger.ts">
  <violation number="1" location="binary/src/IpcMessenger.ts:91">
    Dropping the caught error from the log removes stack/context information, making JSON parse failures hard to diagnose. Please log the error object along with the truncated line.
  </violation>
</file>
```
```ts
 * This is called on first use rather than in the constructor to avoid
 * making HTTP requests when models are just being instantiated for config serialization.
 */
private async ensureModelInfo(): Promise<void> {
```
Lazy-loading the model info leaves supportsFim() false for the first request, so the initial autocomplete call always skips the FIM path and ignores the suffix context for FIM-capable models.
Prompt for AI agents
Address the following comment on core/llm/llms/Ollama.ts at line 175:
<comment>Lazy-loading the model info leaves `supportsFim()` false for the first request, so the initial autocomplete call always skips the FIM path and ignores the suffix context for FIM-capable models.</comment>
File context:
```diff
@@ -161,47 +161,61 @@ class Ollama extends BaseLLM implements ModelInstaller {
+ * This is called on first use rather than in the constructor to avoid
+ * making HTTP requests when models are just being instantiated for config serialization.
+ */
+private async ensureModelInfo(): Promise<void> {
+  if (this.model === "AUTODETECT") {
     return;
```
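To make the reviewer's point concrete, here is a rough sketch of the hazard and one possible fix. Only `ensureModelInfo`, `supportsFim`, and the AUTODETECT guard come from the diff and review; the class body, the capability name, and the fetch payload are illustrative assumptions, not the actual Ollama.ts code:

```ts
// Sketch of the ordering hazard (names from the diff; everything else assumed).
class OllamaSketch {
  private modelInfo?: { capabilities?: string[] };

  // Synchronous check: stays false until ensureModelInfo() has resolved,
  // so the first autocomplete request can miss the FIM path entirely.
  supportsFim(): boolean {
    // "insert" is a placeholder capability name, not confirmed from the source.
    return this.modelInfo?.capabilities?.includes("insert") ?? false;
  }

  private async ensureModelInfo(): Promise<void> {
    if (this.modelInfo) return;
    // Hypothetical api/show call standing in for the real fetch logic.
    const res = await fetch("http://localhost:11434/api/show", {
      method: "POST",
      body: JSON.stringify({ model: "codellama" }),
    });
    this.modelInfo = await res.json();
  }

  // One way to close the gap: await the info before reading the flag,
  // so even the very first request sees the fetched capabilities.
  async supportsFimAsync(): Promise<boolean> {
    await this.ensureModelInfo();
    return this.supportsFim();
  }
}
```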
```diff
      line.substring(0, 100) + "..." + line.substring(line.length - 100);
  }
- console.error("Error parsing line: ", truncatedLine, e);
+ console.error("Error parsing JSON from line: ", truncatedLine);
```
Dropping the caught error object from the console.error call removes the stack trace and error message, making JSON parsing failures much harder to debug.
Prompt for AI agents
Address the following comment on binary/src/TcpMessenger.ts at line 127:
<comment>Dropping the caught error object from the console.error call removes the stack trace and error message, making JSON parsing failures much harder to debug.</comment>
File context:
```diff
@@ -124,7 +124,7 @@ export class TcpMessenger<
      line.substring(0, 100) + "..." + line.substring(line.length - 100);
  }
- console.error("Error parsing line: ", truncatedLine, e);
+ console.error("Error parsing JSON from line: ", truncatedLine);
  return;
}
```
Suggested change:
```diff
- console.error("Error parsing JSON from line: ", truncatedLine);
+ console.error("Error parsing JSON from line: ", truncatedLine, e);
```
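Put together, the handler the suggestion arrives at looks roughly like this. A minimal sketch: the wrapper function and the 200-character threshold are assumptions, while the truncation arithmetic and the log call follow the diff:

```ts
// Hypothetical wrapper around the parse-and-log path shown in the diff.
function parseLine(line: string): unknown {
  try {
    return JSON.parse(line);
  } catch (e) {
    let truncatedLine = line;
    if (line.length > 200) {
      // Keep the head and tail so oversized payloads don't flood the log.
      truncatedLine =
        line.substring(0, 100) + "..." + line.substring(line.length - 100);
    }
    // Passing `e` preserves the error message and stack trace,
    // which is exactly what the review asks to keep.
    console.error("Error parsing JSON from line: ", truncatedLine, e);
    return undefined;
  }
}
```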
```diff
      line.substring(0, 100) + "..." + line.substring(line.length - 100);
  }
- console.error("Error parsing line: ", truncatedLine, e);
+ console.error("Error parsing JSON from line: ", truncatedLine);
```
Dropping the caught error from the log removes stack/context information, making JSON parse failures hard to diagnose. Please log the error object along with the truncated line.
Prompt for AI agents
Address the following comment on binary/src/IpcMessenger.ts at line 91:
<comment>Dropping the caught error from the log removes stack/context information, making JSON parse failures hard to diagnose. Please log the error object along with the truncated line.</comment>
File context:
```diff
@@ -88,7 +88,7 @@ class IPCMessengerBase<
      line.substring(0, 100) + "..." + line.substring(line.length - 100);
  }
- console.error("Error parsing line: ", truncatedLine, e);
+ console.error("Error parsing JSON from line: ", truncatedLine);
  return;
}
```
Suggested change:
```diff
- console.error("Error parsing JSON from line: ", truncatedLine);
+ console.error("Error parsing JSON from line: ", truncatedLine, e);
```
Description
Summary by cubic
Lazily fetch and cache Ollama model info to cut repeated api/show calls. Cleaned up JetBrains startup and logging to reduce noise.
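A minimal sketch of that fetch-and-cache idea, assuming a module-level promise cache; the endpoint URL and payload shape are assumptions, not the PR's actual code:

```ts
// Cache the in-flight promise so concurrent callers trigger at most one
// api/show request instead of one per call.
let modelInfoPromise: Promise<unknown> | undefined;

async function getModelInfo(model: string): Promise<unknown> {
  modelInfoPromise ??= fetch("http://localhost:11434/api/show", {
    method: "POST",
    body: JSON.stringify({ model }), // payload shape is an assumption
  }).then((r) => r.json());
  return modelInfoPromise;
}
```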
Refactors
Bug Fixes
Written for commit 685acbc.