
Added azure api version patch #786

Conversation

hargunmujral

Describe the changes you have made:

  • Azure OpenAI uses an "api version" parameter when running Open Interpreter, so this adds it as a parameter in the setup file
  • Fixed broken link to CONTRIBUTING.md in the README.md

Reference any relevant issue (Fixes #000)

  • I have performed a self-review of my code:

I have tested the code on the following OS:

  • Windows
  • MacOS
  • Linux

AI Language Model (if applicable)

  • GPT4
  • GPT3
  • Llama 7B
  • Llama 13B
  • Llama 34B
  • Huggingface model (Please specify which one)

@Notnaton
Collaborator

Should this have a --api_version cli argument?

@KillianLucas
Collaborator

Nice @hargunmujral. We should start moving in this direction of exposing more params via the Python package (and via CLI flags as @Notnaton mentioned, which I'll do after merging). Especially with the new config setup that @Notnaton is building, I think this takes us in the right direction. Merging!

@KillianLucas KillianLucas merged commit 8fec1d1 into OpenInterpreter:main Nov 26, 2023
@KillianLucas
Collaborator

Also great catch on the CONTRIBUTING link!

@symonh

symonh commented Nov 26, 2023

Thanks everyone. I updated interpreter (pip install...) and then ran export AZURE_API_VERSION=2023-08-01-preview. Now when I run interpreter I get this error:

APIError(status_code=500, message=str(original_exception), llm_provider=custom_llm_provider, model=model)
litellm.exceptions.APIError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/',..)` Learn more: https://docs.litellm.ai/docs/providers
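This error means litellm could not infer which provider to route the request to. As the message hints, litellm decides based on a provider prefix in the model string (e.g. `azure/<deployment>`). An illustrative sketch of that prefix convention (not litellm's actual code):

```python
def split_provider(model: str) -> tuple[str, str]:
    """Split a litellm-style model string such as 'azure/my-deployment'
    into (provider, deployment). Returns ('', model) when no prefix is
    present -- the case where litellm cannot tell which provider to use."""
    if "/" in model:
        provider, name = model.split("/", 1)
        return provider, name
    return "", model

print(split_provider("azure/nameai-63k-tpm"))  # ('azure', 'nameai-63k-tpm')
print(split_provider("nameai-63k-tpm"))        # ('', 'nameai-63k-tpm')
```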

Following the litellm docs, I tried running interpreter --model=azure/nameai-63k-tpm

This loads open interpreter, but when I pass a message to it I get

openai.error.InvalidRequestError: Invalid URL (POST /v1/openai/deployments/nameai-63k-tpm/chat/completions)

My config looks like this:

local: false
temperature: 1
context_window: 31000
max_tokens: 3000
OPENAI_API_KEY: 97-*******************84
API_BASE: https://nameai.openai.azure.com/
API_TYPE: azure
MODEL: azure/nameai-63k-tpm
AZURE_API_VERSION: 2023-08-01-preview

I've played with setting API_BASE to https://nameai.openai.azure.com/openai/deployments/nameai-63k-tpm — but I get the same openai.error.InvalidRequestError: Invalid URL (POST /v1/openai/deployments/nameai-63k-tpm/chat/completions).
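For context, the error shows the request going to OpenAI's `/v1/...` path rather than Azure's endpoint shape, which suggests the azure api_type was not being picked up. Azure's documented chat-completions URL is composed like this (the deployment and version values below are just the ones from this thread):

```python
def azure_chat_url(api_base: str, deployment: str, api_version: str) -> str:
    """Compose an Azure OpenAI chat-completions endpoint, following Azure's
    documented REST URL shape:
    {base}/openai/deployments/{deployment}/chat/completions?api-version={v}"""
    return (f"{api_base.rstrip('/')}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")

print(azure_chat_url("https://nameai.openai.azure.com/",
                     "nameai-63k-tpm", "2023-08-01-preview"))
# https://nameai.openai.azure.com/openai/deployments/nameai-63k-tpm/chat/completions?api-version=2023-08-01-preview
```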

Thank you for this project and for any help you can provide — this will be an absolute dream when I get it working!

@Notnaton
Collaborator

@symonh use lower case in the config file and try again
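Applying that advice to the config above would give something like this sketch (a straight lower-casing of the same keys, values unchanged; exact key names accepted by the config loader are an assumption here):

```yaml
local: false
temperature: 1
context_window: 31000
max_tokens: 3000
openai_api_key: 97-*******************84
api_base: https://nameai.openai.azure.com/
api_type: azure
model: azure/nameai-63k-tpm
azure_api_version: 2023-08-01-preview
```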

@symonh

symonh commented Nov 26, 2023

@Notnaton Thank you very much — it's working!

When analyzing data, how good should I expect the performance to be compared to ChatGPT? Roughly on par, minus the limitations from packages and compute, or less capable?

I just asked it to reshape a dataset and it performed beautifully :)
