Description
Problem Summary
When using the CodeGPT extension with local Large Language Models (LLMs) served via LM Studio (tested with qwen2.5-coder:14b), several actions initiated from the VS Code right-click context menu either return malformed output (`<|im_start|>`) or no response at all. In contrast, the equivalent slash commands (`/Document`, `/Refactor`, `/Fix`, `/Explain`, `/Comment`, `/UnitTest`, `/Debug`) in the CodeGPT chat panel consistently work correctly and provide high-quality output.
Reproducibility
This issue is consistently reproducible.
Environment
- VS Code Version: 1.101.2
- CodeGPT Extension Version: 3.12.107
- Operating System: macOS Sequoia 15.5 (24F74)
- LM Studio Version: 0.3.17
- Local LLM Model: `qwen2.5-coder:14b` (specifically `qwen2.5-coder-14b-GGUF/qwen2.5-coder-14b.gguf`)
- LM Studio Server Settings:
  - Context Length: 8096 (model supports up to 32768 tokens)
  - GPU Offload: 48 / 48
  - CPU Thread Pool Size: 8
  - Evaluation Batch Size: 512
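
The LM Studio server can be sanity-checked independently of VS Code. The sketch below assumes LM Studio's OpenAI-compatible server is running on its default `http://localhost:1234` address (adjust the URL if the server is configured differently) and simply lists the model identifiers the server exposes:

```python
import json
import urllib.request

# Assumed default address of LM Studio's OpenAI-compatible server.
BASE_URL = "http://localhost:1234/v1"

# List the models the server currently exposes; the qwen2.5-coder model
# should appear here under the identifier that clients must use.
with urllib.request.urlopen(f"{BASE_URL}/models") as resp:
    models = json.load(resp)

for entry in models.get("data", []):
    print(entry["id"])
```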
Steps to Reproduce
- Ensure CodeGPT is configured to use LM Studio as its provider and the `qwen2.5-coder:14b` model is loaded and running in LM Studio.
- Create a new Python file (e.g., `test_code_gpt.py`) in VS Code.
- Use the following simple `factorial` function for initial tests:

  ```python
  def factorial(n):
      if n == 0:
          return 1
      else:
          return n * factorial(n-1)
  ```
Test Context Menu Actions (Issue Occurs Here)
- "CodeGPT: Document":
  - Select the `factorial` function.
  - Right-click -> CodeGPT -> `CodeGPT: Document`.
  - Expected: A generated docstring appears in the CodeGPT chat panel.
  - Actual: `<|im_start|> <|im_start|>` or similar malformed output appears in the chat panel.
- "CodeGPT: Refactor":
  - Select the `factorial` function.
  - Right-click -> CodeGPT -> `CodeGPT: Refactor`.
  - Expected: A refactored version of the code appears in the CodeGPT chat panel.
  - Actual: The command appears in the chat, but no response from the model follows; the chat simply remains ready for the next input.
- "CodeGPT: Explain" (for complex code):
  - Replace the content of `test_code_gpt.py` with:

    ```python
    def calculate_circle_area_and_circumference(radius):
        if radius < 0:
            print("Error: Radius cannot be negative.")
            return None, None
        area = 3.14159 * radius * radius
        circumference = 2 * 3.14159 * radius
        print(f"For radius {radius}: Area = {area}, Circumference = {circumference}")
        return area, circumference
    ```

  - Select the `calculate_circle_area_and_circumference` function.
  - Right-click -> CodeGPT -> `CodeGPT: Explain`.
  - Expected: An explanation of the code appears.
  - Actual: `<|im_start|> <|im_start|>` or similar malformed output appears in the chat panel. (Note: for the simple `factorial` function, this action does work correctly.)
- "CodeGPT: Find Problems" (for complex code with issues):
  - Replace the content of `test_code_gpt.py` with:

    ```python
    def divide(a, b):
        # Deliberate error: missing zero-division handling
        return a / b

    def greet(name):
        # Deliberate style issue: string concatenation instead of an f-string
        print("Hello, " + name + "!")

    def calculate_average(numbers):
        total = 0
        for num in numbers:
            total += num
        # Deliberate logical error: divides by the list length without checking if the list is empty
        return total / len(numbers)
    ```

  - Select all three functions.
  - Right-click -> CodeGPT -> `CodeGPT: Find Problems`.
  - Expected: An analysis of problems/bugs in the code.
  - Actual: `<|im_start|> <|im_start|>` or similar malformed output appears in the chat panel.
Test Slash Commands (Workaround / Correct Behavior)
- For each problematic scenario above (`Document`, `Refactor`, `Explain` on complex code, `Find Problems`), perform the equivalent action using the slash command in the CodeGPT chat panel after selecting the code.
- Example (for Document): Select code -> Open CodeGPT Chat -> type `/Document` -> Press Enter.
- Observed: All slash commands (`/Document`, `/Refactor`, `/Explain`, `/Fix`, `/UnitTest`, `/Comment`, `/Debug`) work correctly and provide high-quality, relevant output, regardless of code complexity.
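For comparison, the behavior of the working slash-command path can be approximated outside the extension by sending the selected code to LM Studio's `/v1/chat/completions` endpoint, which applies the model's chat template server-side. This is only a sketch under the same assumptions as above (default server address, and a placeholder model identifier that should be replaced with the one reported by `/v1/models`); it is not a reconstruction of the request CodeGPT actually sends:

```python
import json
import urllib.request

CODE = '''def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)
'''

payload = {
    # Placeholder: use the identifier reported by /v1/models.
    "model": "qwen2.5-coder-14b",
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a docstring for this function:\n\n" + CODE},
    ],
    "temperature": 0.2,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

# With the chat template applied server-side, the reply should be plain text,
# without <|im_start|> markers.
print(reply["choices"][0]["message"]["content"])
```

Run this way, the model's reply is expected to look like the clean output the slash commands produce.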
Expected Behavior
All context menu actions (`CodeGPT: Document`, `CodeGPT: Refactor`, `CodeGPT: Explain`, `CodeGPT: Find Problems`) should function correctly and provide relevant output from the local LLM, just as their corresponding slash commands do.
Additional Notes
- `CodeGPT: Unit Test` from the context menu does work correctly.
- The model itself (`qwen2.5-coder:14b`) demonstrates strong capabilities when the prompt is correctly received (via slash commands), suggesting the issue lies in how the context menu actions construct and send the prompt to LM Studio.
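
`<|im_start|>` is the ChatML chat-template marker used by Qwen models, so seeing it verbatim in the chat panel suggests the failing requests are not being run through the model's chat template. Which endpoint or payload the context menu actions actually use is not visible from the outside, but one way to localize the failure is to compare the templated chat request above with a raw, untemplated request to LM Studio's `/v1/completions` text-completion endpoint. The following is a hypothetical diagnostic sketch (placeholder model name and prompt):

```python
import json
import urllib.request

# Hypothesis check only: send the instruction as a raw prompt, with no chat
# messages and no template applied. If ChatML markers such as <|im_start|>
# show up here but not via /v1/chat/completions, the malformed context-menu
# output likely comes from how the request is constructed, not from the model.
payload = {
    "model": "qwen2.5-coder-14b",  # placeholder: identifier from /v1/models
    "prompt": "Write a docstring for this function:\n\ndef factorial(n): ...",
    "max_tokens": 256,
    "temperature": 0.2,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["text"])
```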