
LiteLLM Example in mistral docs wrong #3024

Open
ishaan-jaff opened this issue Feb 26, 2024 · 14 comments
@ishaan-jaff

Operating System

macOS

Version Information

not relevant

Steps to reproduce

https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
@santiagxf Thanks for showing an example with litellm, but the docs are wrong

Here's how to use litellm with mistral
https://docs.litellm.ai/docs/providers/mistral

from litellm import completion
import os

os.environ['MISTRAL_API_KEY'] = ""
response = completion(
    model="mistral/mistral-tiny",
    api_base="your-api-base",
    messages=[
        {"role": "user", "content": "hello from litellm"}
    ],
)
print(response)

Expected behavior

Actual behavior

Additional information

No response

@santiagxf
Member

santiagxf commented Feb 26, 2024

@ishaan-jaff, the example is correct. You are sharing an example of how to use Mistral's own inference platform, but this notebook is about Azure AI. Does that make sense?

@ishaan-jaff
Author

ishaan-jaff commented Feb 26, 2024

Any reason why you could not use it like this, @santiagxf? (I'm the maintainer of litellm.)

This looks a lot easier to me, and it can go to an Azure AI endpoint

from litellm import completion
import os

os.environ['MISTRAL_API_KEY'] = ""
response = completion(
    model="mistral/mistral-tiny",
    api_base="your-api-base",
    messages=[
        {"role": "user", "content": "hello from litellm"}
    ],
)
print(response)

@ishaan-jaff
Author

ishaan-jaff commented Feb 26, 2024

Hi @santiagxf, I just deployed on Azure AI Studio and was able to run inference with this code.

If possible, can we update the Python notebook with the following code? It uses the standard format from the litellm docs: https://docs.litellm.ai/docs/providers/azure_ai

Happy to make a PR for this too

from litellm import completion
import os

response = completion(
    model="mistral/Mistral-large-dfgfj",
    api_base="https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1",
    api_key="JGbKodRcTp****",
    messages=[
        {"role": "user", "content": "hello from litellm"}
    ],
)
print(response)

@santiagxf
Member

We tried your example, but it looks like the api_base is hardcoded somewhere. Hence you get an access-denied error because the token is sent to the wrong API (I caught this by setting litellm.set_verbose=True):

POST Request Sent from LiteLLM:
curl -X POST \
https://api.mistral.ai/v1/ \
-d '{'model': 'Mistral-large-dfgfj', 'messages': [{'role': 'user', 'content': 'hello from litellm'}], 'extra_body': {}}'

Is this something you can fix? @ishaan-jaff

@ishaan-jaff
Author

Yes we fixed this today: BerriAI/litellm#2216 @santiagxf

Thanks for raising this

@santiagxf
Member

I just had a look and noticed that it requires /v1 at the end of the URL. mistral-python, the official client, doesn't require it. I think we should fix that so both work the same way.

https://github.com/BerriAI/litellm/blob/e56cc26e18b83aa39617b7d2cf8ce3c1e3dfee87/litellm/utils.py#L4931

Also, I spotted this line:

https://github.com/BerriAI/litellm/blob/e56cc26e18b83aa39617b7d2cf8ce3c1e3dfee87/litellm/utils.py#L4932

Does it mean api_key is being ignored if passed as an argument?

@ishaan-jaff
Author

Does it mean api_key is being ignored if passed as an argument?

Nope, that line ensures we use MISTRAL_API_KEY from the env.

think we should fix that to work in the same way.

Do you mean the litellm package should append /v1 to the Azure AI Studio api_base if the user does not add it?
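
That /v1 normalization could behave like the following sketch. The helper name and URL are hypothetical illustrations, not litellm's actual internals:

```python
def normalize_api_base(api_base: str) -> str:
    """Hypothetical helper: append /v1 when the caller omits it, so the
    same URLs the official mistral client accepts would also work here."""
    base = api_base.rstrip("/")
    if not base.endswith("/v1"):
        base += "/v1"
    return base

# With or without the suffix, both forms resolve to the same endpoint:
print(normalize_api_base("https://example-serverless.eastus2.inference.ai.azure.com"))
print(normalize_api_base("https://example-serverless.eastus2.inference.ai.azure.com/v1"))
```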

@ishaan-jaff
Author

Does it mean api_key is being ignored if passed as an argument?

I think you're right about this actually; going to take a look at it.

Summarizing next steps for litellm:

  • Stop requiring /v1 for Mistral on Azure AI Studio
  • Ensure api_key is not ignored when passed for mistral/

Does this sound good, @santiagxf? Should I make a PR to this repo once this is fixed?
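
The intended api_key precedence being discussed could be sketched as follows, assuming an explicitly passed key should win over the environment variable. This is a hypothetical helper, not litellm's actual code:

```python
import os

def resolve_mistral_api_key(api_key=None):
    # Hypothetical resolution order: an explicit argument wins;
    # otherwise fall back to the MISTRAL_API_KEY environment variable.
    if api_key:
        return api_key
    return os.environ.get("MISTRAL_API_KEY")
```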

@santiagxf
Member

It sounds good to me. The URL thing is not a big deal, but I think it would be nice to use the same approach as the official client.

Thanks for taking care of this! Once the new version of the library is out, we can update this doc. Feel free to use this issue; I will keep it open so I remember to do it.

@hsleiman1

hsleiman1 commented Feb 28, 2024

Hi,
Why do we need to mention the api_base and api_key, which are not needed for the other models? It would be great if they were also read from environment variables.
Thank you!
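
The env-driven setup being requested might look like this sketch; the variable names AZURE_AI_API_BASE and AZURE_AI_API_KEY are assumptions for illustration, not names litellm actually reads:

```python
import os

def load_azure_ai_config():
    """Hypothetical sketch: read endpoint and key from the environment
    so the completion() call site stays free of credentials."""
    return {
        "api_base": os.environ.get("AZURE_AI_API_BASE"),
        "api_key": os.environ.get("AZURE_AI_API_KEY"),
    }
```

The returned dict could then be unpacked into the completion() call, e.g. completion(model=..., messages=..., **load_azure_ai_config()), instead of hardcoding credentials inline.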

@ishaan-jaff
Author

@hsleiman1 noted - will add this as an improvement too

@ishaan-jaff
Author

Tracking this here @hsleiman1 BerriAI/litellm#2237

@ishaan-jaff
Author

Fixes are here: BerriAI/litellm#2247 @hsleiman1 @santiagxf. Will update once a new release is out.

@santiagxf
Member

@ishaan-jaff we still see the same issues happening. I added the comment in the issue you created on your repo:

BerriAI/litellm#2237 (comment)
