The 'systemMessage' is not applied to built-in commands #1077

Open
3 tasks done
Tracked by #1009
ajalexander opened this issue Apr 4, 2024 · 1 comment
Labels
bug Something isn't working

Comments

ajalexander commented Apr 4, 2024

Relevant environment info

- OS: macOS 14.4.1
- Continue: v0.9
- IDE: VSCode 1.87.2

Description

The systemMessage property is not applied to built-in commands (/edit, /comment, etc.) when using models with the openai provider. This report assumes that including the systemMessage in those cases is the desired/expected behavior.

The systemMessage is included with general queries in the sidebar, so the gap appears specific to the command pathways.

In my particular case I'm using a vLLM server rather than OpenAI directly, but the issue should be the same regardless of the backend.
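For illustration, here is the shape of the messages array I would expect versus what is actually sent (contents abbreviated; the exact user prompt varies by command):

  // Expected (matches the sidebar behavior):
  [
    { role: "system", content: "<configured systemMessage>" },
    { role: "user", content: "<prompt built by /edit or /comment>" },
  ]

  // Actual (/edit, /comment):
  [
    { role: "user", content: "<prompt built by /edit or /comment>" },
  ]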

To reproduce

There is nothing directly in the Continue output window that shows the issue. For each of the "Look at the messages sent" steps in the write-up below, you can either:

  • Run the extension in a debugger with a breakpoint set at this point in the OpenAI LLM class and examine body.messages (a temporary log line works too; see the sketch after this list), or
  • Look at the messages received by the LLM server (possible in my case since I can read the vLLM logs).
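If attaching a debugger is inconvenient, a throwaway log line at the same spot should show the outgoing payload. The exact placement is hypothetical; body here is the request body assembled in the OpenAI class:

  // Temporary diagnostic, added just before the request is sent:
  console.log("outgoing chat messages:", JSON.stringify(body.messages, null, 2));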

To reproduce:

  1. Set up the configuration to have a model with provider set to openai
  2. Include a systemMessage for that model (see the example config after this list)
  3. Open the Continue output window
  4. Ask a question using the sidebar
  5. Look at the messages sent to confirm the systemMessage is present (working)
  6. Select some code
  7. Invoke the /comment command
  8. Look at the messages sent to see whether the systemMessage is present (not working)
  9. Select some code
  10. Invoke the /edit command
  11. Look at the messages sent to see whether the systemMessage is present (not working)
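For reference, the relevant part of my config.json looks roughly like this (values are illustrative; in my setup apiBase points at the vLLM server's OpenAI-compatible endpoint):

  {
    "models": [
      {
        "title": "vLLM model",
        "provider": "openai",
        "model": "<model name served by vLLM>",
        "apiBase": "http://localhost:8000/v1",
        "systemMessage": "You are a concise assistant..."
      }
    ]
  }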

Log output

No response

ajalexander added the bug label on Apr 4, 2024
ajalexander (Contributor, Author) commented:

I've tested a change for this locally using the following:

diff --git a/core/llm/llms/OpenAI.ts b/core/llm/llms/OpenAI.ts
index da31df62..dacd16c8 100644
--- a/core/llm/llms/OpenAI.ts
+++ b/core/llm/llms/OpenAI.ts
@@ -122,8 +122,12 @@ class OpenAI extends BaseLLM {
     prompt: string,
     options: CompletionOptions,
   ): AsyncGenerator<string> {
+    const messages: ChatMessage[] = [{ role: "user", content: prompt }];
+    if (this.systemMessage && this.systemMessage.trim().length !== 0) {
+      messages.unshift({ role: "system", content: this.systemMessage });
+    }
     for await (const chunk of this._streamChat(
-      [{ role: "user", content: prompt }],
+      messages,
       options,
     )) {
       yield stripImages(chunk.content);

This changes the behavior to what I'd expect for my use case. However, I don't know whether a similar change should also apply in the _complete function (I haven't tracked down which code pathway(s) use _complete instead of _streamComplete).
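If it turns out _complete needs the same fix, one way to avoid duplicating the guard would be a small private helper on the class, roughly like this (a sketch only: withSystemMessage is a name I made up, and ChatMessage is the type already used in OpenAI.ts):

  private withSystemMessage(prompt: string): ChatMessage[] {
    // Start with the user prompt, then prepend the configured system message, if any.
    const messages: ChatMessage[] = [{ role: "user", content: prompt }];
    if (this.systemMessage?.trim()) {
      messages.unshift({ role: "system", content: this.systemMessage });
    }
    return messages;
  }

Both pathways could then call this._streamChat(this.withSystemMessage(prompt), options) (or the equivalent in _complete) instead of building the array inline.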
