LLM Pipeline with LiteLLM doesn't work #679
Comments
What happens if you do this: `llm = LLM(MODEL_NAME, method="litellm", api_base=api_base)` or this: `llm = LLM(path=MODEL_NAME, method="litellm", api_base=api_base)`?
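Put together, that first suggestion would look something like this (a minimal sketch built from the values in the report below; it assumes a local inference endpoint is serving the model at `api_base`):

```python
from txtai.pipeline import LLM

MODEL_NAME = "huggingface/TheBloke/leo-hessianai-70B-chat-GPTQ"
api_base = "http://127.0.0.1:8080"

# Pass the model as the first positional argument (txtai's "path" parameter)
# rather than as model=..., so the pipeline doesn't presumably fall back to
# its default model (the error below mentions google/flan-t5-base)
llm = LLM(MODEL_NAME, method="litellm", api_base=api_base)
print(llm("Hey, how's it going?"))
```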
Hi David, it works fine now. But this isn't mentioned in your docs.
Please refer to the LLM pipeline documentation.
Hmm -> the docs just show `path: model path` under the `llm:` example. So you have to guess that, in front of the path, you have to write that it's from huggingface if you use litellm. One or two more sentences would clear up the situation.
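For example, the docs entry could spell this out with a configuration along these lines (a hedged sketch; the keys mirror the pipeline's keyword arguments, and the exact docs format is assumed, not quoted):

```yaml
llm:
  # When method is litellm, prefix the model path with the provider,
  # e.g. huggingface/ for Hugging Face inference endpoints
  path: huggingface/TheBloke/leo-hessianai-70B-chat-GPTQ
  method: litellm
  api_base: http://127.0.0.1:8080
```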
Hi,
I ran into very strange behavior with the LLM pipeline:
################# this works fine ###########################

```python
import litellm
from litellm import completion

MODEL_NAME = "huggingface/TheBloke/leo-hessianai-70B-chat-GPTQ"
messages = [{"role": "user", "content": "Hey, how's it going?"}]  # LiteLLM follows the OpenAI format
api_base = "http://127.0.0.1:8080"

# Calling endpoint
completion(model=MODEL_NAME, messages=messages, api_base=api_base)
```
->

```
ModelResponse(id='chatcmpl-ef17314f-086f-426c-857d-532ebbe41c06', choices=[Choices(finish_reason='length', index=0, message=Message(content="<|im_start|>user\nHey, how's it going?<|im_end|>\n<|im_start|>user\nHey, how's it going?<|im_end|>\n<|im_start|>user\nHey, how's it going?<|im_end|>\n<|im_start|>user\nHey, how's it going?<|im_", role='assistant', _logprob=-8.859629760100995))], created=1709116562, model='TheBloke/leo-hessianai-70B-chat-GPTQ', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=20, completion_tokens=78, total_tokens=98), _response_ms=28474.163)
```
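Since LiteLLM follows the OpenAI response format, the generated text can be read from the first choice (a short usage sketch for the call above):

```python
# Capture the return value and print the generated text
response = completion(model=MODEL_NAME, messages=messages, api_base=api_base)
print(response.choices[0].message.content)
```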
################ this doesn't work ###########################
```python
from txtai.pipeline import LLM

llm = LLM(model=MODEL_NAME, method="litellm", api_base=api_base)
```

This gives the following error message:
```
LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=google/flan-t5-base
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
```

######################################################
Perhaps someone can tell me why this doesn't work.