
The first time you use the plug-in, after entering apiBaseUrl, entering the key and clicking verify gives no response. #36

Closed
zzy-life opened this issue May 31, 2023 · 17 comments
Labels
bug Something isn't working

Comments

@zzy-life
Contributor

Describe the Bug

The first time you use the plug-in, after entering apiBaseUrl, entering the key and clicking verify gives no response. (If there is a response, asking a question reports a 401 error.) You need to restart VS Code before it works normally.

Where are you running VSCode? (Optional)

None

Which OpenAI model are you using? (Optional)

None

Additional context (Optional)

No response

@zzy-life added the bug label on May 31, 2023
@Christopher-Hayes
Owner

Sorry, missed this issue. I'll investigate.

@Cytranics

Chris, man, can you just remove the whole "paste your API key and run a status check on the models" step? Everyone has the same models. Doing this will open the door to allowing Azure, but man, you've really locked things down for no reason.

@Christopher-Hayes
Owner

Christopher-Hayes commented Jul 3, 2023

> Chris, man, can you just remove the whole "paste your API key and run a status check on the models" step? Everyone has the same models. Doing this will open the door to allowing Azure, but man, you've really locked things down for no reason.

I'll reconsider how the code works.

This was done to show which models could be used. For a while this was needed for gpt-4; it's still somewhat needed for gpt-4-32k, which is limited access and exceedingly expensive if actually used.

Ideally, this is fixed without removing model availability functionality. I'm not in a rush to add support for new APIs if it degrades the experience for the bulk of users.
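
To be concrete about what that availability check amounts to, here's a rough sketch using the openai v3 Node client to list the models a key actually has access to (illustrative only, not the extension's exact code):

```typescript
import { Configuration, OpenAIApi } from "openai";

// Hypothetical helper: list the chat models this API key can actually use,
// e.g. to decide whether to show gpt-4 / gpt-4-32k in the model dropdown.
async function getAvailableChatModels(apiKey: string, basePath?: string): Promise<string[]> {
  const client = new OpenAIApi(new Configuration({ apiKey, basePath }));
  const response = await client.listModels();
  return response.data.data
    .map((model) => model.id)
    .filter((id) => id.startsWith("gpt-")); // e.g. gpt-3.5-turbo, gpt-4, gpt-4-32k
}
```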

@Cytranics

Cytranics commented Jul 3, 2023 via email

@Christopher-Hayes
Owner

@Cytranics I can add a bypass button short-term. I think the root issue here is that updates to apiBaseUrl put the extension in a broken state until restarted, which should be fixable.
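
Roughly the kind of fix I'm thinking of: rebuild the API client whenever the base URL setting changes instead of requiring a restart. A minimal sketch (the setting key and rebuild callback here are illustrative, not the extension's actual identifiers):

```typescript
import * as vscode from "vscode";

export function watchApiBaseUrl(context: vscode.ExtensionContext, rebuildClient: () => void) {
  context.subscriptions.push(
    vscode.workspace.onDidChangeConfiguration((event) => {
      // "chatgpt.apiBaseUrl" is a placeholder; the real setting key may differ.
      if (event.affectsConfiguration("chatgpt.apiBaseUrl")) {
        rebuildClient(); // recreate the OpenAI client with the new base URL
      }
    })
  );
}
```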

In reference to the prompt: that's already changeable. OpenAI calls this the "system context", which is what you'll find in the extension settings. I know that's not evident to most users; I might need to update that setting to include the word "prompt".

Btw, that's very generous, I don't do FOSS full-time, so I couldn't accept that in good conscience.

@Christopher-Hayes
Owner

For example: [two screenshots attached]

@zzy-life
Contributor Author

zzy-life commented Jul 3, 2023

Please consider supporting the 3.5 16k model and adding a button to switch the context mode.

@Christopher-Hayes
Owner

> Please consider supporting the 3.5 16k model and adding a button to switch the context mode.

gpt-3.5-turbo-16k is already supported in the latest version of the extension. Having an easier way to change the "system context" is a possibility.
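
For clarity, the "system context" is just the system message sent at the start of each chat completion request, roughly like this (a sketch with the openai v3 client, not the extension's exact code):

```typescript
import { Configuration, OpenAIApi } from "openai";

async function ask(apiKey: string, systemContext: string, question: string) {
  const openai = new OpenAIApi(new Configuration({ apiKey }));
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-16k",
    messages: [
      { role: "system", content: systemContext }, // the "System Context" setting
      { role: "user", content: question },
    ],
  });
  return completion.data.choices[0].message?.content;
}
```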

@zzy-life
Contributor Author

zzy-life commented Jul 3, 2023

> Please consider supporting the 3.5 16k model and adding a button to switch the context mode.

> gpt-3.5-turbo-16k is already supported in the latest version of the extension. Having an easier way to change the "system context" is a possibility.

Sorry, I didn't notice that 16k is already supported. But switching the context mode is also necessary, because sometimes GPT does not need to carry memory.

@Christopher-Hayes
Owner

> Please consider supporting the 3.5 16k model and adding a button to switch the context mode.

> gpt-3.5-turbo-16k is already supported in the latest version of the extension. Having an easier way to change the "system context" is a possibility.

> Sorry, I didn't notice that 16k is already supported. But switching the context mode is also necessary, because sometimes GPT does not need to carry memory.

Could you elaborate on "switching the context mode is also necessary, because sometimes GPT does not need to carry memory"? I'm not sure I'm 100% understanding. What do you mean by "switch context mode"? If you're referring to changing the "system context", as I mentioned above, that's already supported in the extension. I can work on making it easier to find, but it is there in the extension settings under "System Context".

@zzy-life
Contributor Author

zzy-life commented Jul 3, 2023

> Please consider supporting the 3.5 16k model and adding a button to switch the context mode.

> gpt-3.5-turbo-16k is already supported in the latest version of the extension. Having an easier way to change the "system context" is a possibility.

> Sorry, I didn't notice that 16k is already supported. But switching the context mode is also necessary, because sometimes GPT does not need to carry memory.

> Could you elaborate on "switching the context mode is also necessary, because sometimes GPT does not need to carry memory"? I'm not sure I'm 100% understanding. What do you mean by "switch context mode"? If you're referring to changing the "system context", as I mentioned above, that's already supported in the extension. I can work on making it easier to find, but it is there in the extension settings under "System Context".

For example, when context is enabled by default, it sends the entire conversation and consumes more tokens. Sometimes I don't need to send the conversation history, so I'd click the button to turn it off and save tokens.
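
Roughly what I imagine the toggle changing in the request (the names below are just for illustration, not the extension's code):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical: build the messages array for the next request.
function buildMessages(
  systemContext: string,
  history: ChatMessage[],
  latestQuestion: string,
  carryContext: boolean // the proposed toggle
): ChatMessage[] {
  const priorTurns = carryContext ? history : []; // off = skip prior turns, fewer tokens
  return [
    { role: "system", content: systemContext },
    ...priorTurns,
    { role: "user", content: latestQuestion },
  ];
}
```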

@Christopher-Hayes
Owner

@zzy-life ah I see, yeah, right now there's just the "clear" button, or closing the chat. I'll look at a way to incorporate something into the UI.

@zzy-life
Contributor Author

zzy-life commented Jul 3, 2023

> @zzy-life ah I see, yeah, right now there's just the "clear" button, or closing the chat. I'll look at a way to incorporate something into the UI.

Having to click the clear button every time I use it is a little troublesome. If you have time, you can think about it, but it's not an urgently needed feature.

@Christopher-Hayes
Owner

Christopher-Hayes commented Jul 3, 2023

To provide an update, I'll have a fix coming soon for proxy APIs. It will allow the API URL to be set up at the same time as the API key. It will also fix the issue where updating the API URL requires restarting the extension. This will only be a fix for proxies of the OpenAI API.
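
The change essentially boils down to passing the base URL into the client at the same time as the key, something like this (a sketch with the openai v3 client; the extension's actual wiring differs):

```typescript
import { Configuration, OpenAIApi } from "openai";

// Hypothetical factory: build the client from both settings in one place, so
// changing apiBaseUrl just means calling this again rather than restarting VS Code.
export function createOpenAIClient(apiKey: string, apiBaseUrl?: string): OpenAIApi {
  return new OpenAIApi(
    new Configuration({
      apiKey,
      // Proxies of the OpenAI API only need a different basePath,
      // e.g. "https://my-proxy.example.com/v1" (placeholder URL).
      basePath: apiBaseUrl ?? "https://api.openai.com/v1",
    })
  );
}
```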

I looked more into Azure's OpenAI service. This won't be a fix for Azure, Azure does some things differently that make it not quite a drop-in replacement. Since we already have an issue open for Azure API support, I've included more info over there: #28

@Cytranics

Cytranics commented Jul 3, 2023 via email

@Christopher-Hayes
Owner

> If I have time this week I'll do a push with your code for Azure. I just spent so much time reverse engineering Genie because they closed-sourced it. I added system prompt changing and full Azure support. I took a look at your code and it was pretty wildly different; that's why I didn't really get involved. But I've got some free time now. Perhaps update yours.


Sure, I'd appreciate that.

To help you - it seems like for Azure support we'll want to swap the "OpenAI" library in api-provider.ts to use Azure's version, which supports both Azure's modified API and OpenAI's regular API. Azure calls their deployments "engines" and sends that to their API, so introducing an "engine" type to this extension in some way might be needed. In a perfect world, Azure users would see an "engine" dropdown in place of a "model" dropdown.
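
To make the difference concrete, the two request shapes look roughly like this (illustrative fetch calls, not the extension's code; the Azure resource and deployment names are placeholders):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

async function openaiChat(apiKey: string, messages: ChatMessage[]) {
  // OpenAI: the model name goes in the request body.
  return fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model: "gpt-4", messages }),
  });
}

async function azureChat(apiKey: string, messages: ChatMessage[]) {
  // Azure OpenAI: the deployment ("engine") is part of the URL, auth uses an
  // "api-key" header, and an api-version query parameter is required.
  const url =
    "https://my-resource.openai.azure.com/openai/deployments/my-gpt4-deployment" +
    "/chat/completions?api-version=2023-05-15";
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": apiKey },
    body: JSON.stringify({ messages }),
  });
}
```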

Some relevant files:

  • api-provider.ts - Logic for talking to OpenAI API.
    • (Ignore chatgpt-api.ts and abstract-chatgpt-api.ts. Those are no longer used, I forgot to remove them.)
  • chagpt-view-provider.ts - Most of the main process logic occurs here. All frontend->backend event messages are handled in here.
  • renderer/layout.tsx - All backend->frontend messages are handled here, those messages often result in app state changes. Frontend state is managed by Redux.
  • renderer/components/ModelSelect.tsx - The model dropdown component.
  • renderer/components/ApiKeySetup.tsx - API key setup page.

@Christopher-Hayes
Owner

The latest release, v3.19.0, gives you a way to set the apiBaseUrl on the setup screen. It also lets you change the apiBaseUrl at any time without needing to restart VS Code.

Before I close this issue as completed, @zzy-life can you confirm the bug you saw is now fixed?

In relation to the Azure API discussion here, that will continue in #28. With some modifications + the extension switching to Vercel's AI package, we'll soon be able to support a number of non-OpenAI models.
