
Getting error from AzureOpenAI endpoint: logprobs, best_of and echo parameters are not available on gpt-35-turbo model #1564

Closed
hario90 opened this issue Jun 20, 2023 · 5 comments

Comments


hario90 commented Jun 20, 2023

Describe the bug
Getting a 400 HTTP status code response from AzureOpenAI when serviceId and deploymentOrModelId are set to "gpt-35-turbo".


fail: Microsoft.SemanticKernel.IKernel[0]
      Something went wrong while rendering the semantic function or while executing the text completion. Function: FunSkill.Joke. Error: Invalid request: The request is not valid, HTTP status: 400. Details: logprobs, best_of and echo parameters are not available on gpt-35-turbo model. Please remove the parameter and try again. For more details, see https://go.microsoft.com/fwlink/?linkid=2227346.
      Status: 400 (BadRequest)
      ErrorCode: BadRequest

      Content:
      {"error":{"code":"BadRequest","message":"logprobs, best_of and echo parameters are not available on gpt-35-turbo model. Please remove the parameter and try again. For more details, see https://go.microsoft.com/fwlink/?linkid=2227346."}}

To Reproduce
Steps to reproduce the behavior:

  1. Clone https://github.com/microsoft/semantic-kernel-starters/tree/main/sk-csharp-console-chat
  2. In a terminal, change directory to sk-csharp-hello-world
  3. Follow the README.md instructions in this directory to set the dotnet user-secrets, with serviceId and deploymentOrModelId both set to "gpt-35-turbo" (see the sketch after the commands below)
  4. In a terminal, run
dotnet build
dotnet run
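
For concreteness, step 3 boils down to a few dotnet user-secrets calls run from the project directory. A minimal sketch, assuming the key names quoted in this issue (serviceId, deploymentOrModelId); the endpoint and API-key entries, and their exact key names, are defined by the starter's README and are shown here only as placeholders:

# Run from the starter's project directory (requires a UserSecretsId in the .csproj).
# Key names other than serviceId and deploymentOrModelId are illustrative; see the starter's README.
dotnet user-secrets set "serviceId" "gpt-35-turbo"
dotnet user-secrets set "deploymentOrModelId" "gpt-35-turbo"
dotnet user-secrets set "endpoint" "https://<your-resource>.openai.azure.com/"
dotnet user-secrets set "apiKey" "<your-azure-openai-key>"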

Expected behavior
Console app runs without errors.


Desktop (please complete the following information):

  • OS: Windows
  • IDE: N/A
  • NuGet Package Version: 0.15.230531.5-preview and 0.16.230615.1-preview (reproducible in both)



hario90 commented Jun 20, 2023

Closing since I was using a ChatCompletion model when the EndpointType in KernelSettings was set to TextCompletion.
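
In Semantic Kernel terms, the mismatch is which completion service gets registered for the deployment. A minimal sketch of the distinction, assuming the KernelBuilder-style registration from the 0.1x preview packages named above; the exact extension-method names, and the way the starter maps its EndpointType setting onto them, are assumptions here rather than quotes from the starter:

using Microsoft.SemanticKernel;

// Placeholders; in the starter these values come from KernelSettings / user-secrets.
var azureEndpoint = "https://<your-resource>.openai.azure.com/";
var apiKey = "<your-azure-openai-key>";

var builder = Kernel.Builder;

// EndpointType = TextCompletion routes gpt-35-turbo through the legacy completions path,
// which is the request that produced the 400 quoted in this issue:
// builder.WithAzureTextCompletionService("gpt-35-turbo", azureEndpoint, apiKey);

// EndpointType = ChatCompletion registers the chat service that a gpt-35-turbo deployment expects:
builder.WithAzureChatCompletionService("gpt-35-turbo", azureEndpoint, apiKey);

IKernel kernel = builder.Build();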

hario90 closed this as completed Jun 20, 2023

ratdoux commented Jul 28, 2023

Arghh thank you!!!


CalvinQuark commented Aug 23, 2023

I encountered this issue while intentionally targeting the gpt-35-turbo model. That model is not the one indicated in the Hello World example's README file, which instead specifies that the text-davinci-003 model is to be targeted.

When I created a new Azure OpenAI resource, I requested the full gamut of capabilities in the approval request step. Nevertheless, the text-davinci-003 model is not deployable in my Azure AI Studio instance. The Studio portal indicates "No" in the Deployable column and shows a tooltip stating "This model must be fine-tuned before it can be deployed". But the Studio doesn't appear to include, or link to, instructions on how to fine-tune the model so that it can be deployed. Hence, the onboarding experience is a bit confusing to the uninitiated.


hario90 commented Aug 30, 2023

@CalvinQuark That's a good point. I will add a troubleshooting guide for this error, along with information on how to update the endpoint type, to the README.
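
As an interim workaround, switching the starter's endpoint type to chat completion should be a single configuration change. A sketch only; the key name and value string are guesses based on the EndpointType property of KernelSettings mentioned above, so verify the exact spellings against the starter's KernelSettings.cs and README:

# Hypothetical key/value; check the starter's KernelSettings before using.
dotnet user-secrets set "endpointType" "chat-completion"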
