
LLM Pipeline with LiteLLM doesn't work #679

Closed
FBR65 opened this issue Feb 28, 2024 · 4 comments

FBR65 commented Feb 28, 2024

Hi,

I'm seeing very strange behavior with the LLM pipeline:

################# this works fine ###########################
import litellm
from litellm import completion

MODEL_NAME = "huggingface/TheBloke/leo-hessianai-70B-chat-GPTQ"
messages = [{"role": "user", "content": "Hey, how's it going?"}] # LiteLLM follows the OpenAI format
api_base = "http://127.0.0.1:8080"

# Calling endpoint
completion(model=MODEL_NAME, messages=messages, api_base=api_base)

->

ModelResponse(id='chatcmpl-ef17314f-086f-426c-857d-532ebbe41c06', choices=[Choices(finish_reason='length', index=0, message=Message(content="<|im_start|>user\nHey, how's it going?<|im_end|>\n<|im_start|>user\nHey, how's it going?<|im_end|>\n<|im_start|>user\nHey, how's it going?<|im_end|>\n<|im_start|>user\nHey, how's it going?<|im_", role='assistant', _logprob=-8.859629760100995))], created=1709116562, model='TheBloke/leo-hessianai-70B-chat-GPTQ', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=20, completion_tokens=78, total_tokens=98), _response_ms=28474.163)

################ this doesn't work ###########################

from txtai.pipeline import LLM

llm = LLM(model=MODEL_NAME, method="litellm", api_base=api_base)

This gives the following error message:

LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=google/flan-t5-base
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

######################################################

Perhaps someone can tell me why this doesn't work.

@davidmezzetti
Member

What happens if you do this:

llm = LLM(MODEL_NAME, method="litellm", api_base=api_base)

or this:

llm = LLM(path=MODEL_NAME, method="litellm", api_base=api_base)
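
The error above names google/flan-t5-base even though that model was never passed, which suggests the model= keyword was silently ignored and txtai fell back to its default model. A minimal sketch of that failure mode, assuming a constructor that takes the model path as a first parameter named path (hypothetical code, not the actual txtai source):

# Hypothetical sketch of the failure mode, not the actual txtai source
class LLM:
    def __init__(self, path=None, method=None, **kwargs):
        # An unrecognized model= keyword lands in **kwargs, so path stays
        # None and a default model is loaded instead (google/flan-t5-base,
        # per the error message above), which lacks the provider prefix
        # LiteLLM requires.
        self.path = path if path else "google/flan-t5-base"

# model= is absorbed by **kwargs -> default model -> LiteLLM error
broken = LLM(model="huggingface/TheBloke/leo-hessianai-70B-chat-GPTQ")
print(broken.path)   # google/flan-t5-base

# Passing the path positionally or as path= sets it as intended
working = LLM(path="huggingface/TheBloke/leo-hessianai-70B-chat-GPTQ")
print(working.path)  # huggingface/TheBloke/leo-hessianai-70B-chat-GPTQ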

@FBR65
Author

FBR65 commented Feb 29, 2024

Hi David,

It works fine now. But this isn't mentioned in your docs.

FBR65 closed this as completed Feb 29, 2024
@davidmezzetti
Member

Please refer to the LLM pipeline documentation.

@FBR65
Author

FBR65 commented Mar 1, 2024

Hmm ->

path: model path

example:

llm:
  path: Open-Orca/Mistral-7B-OpenOrca
  torch_dtype: torch.bfloat16

So you have to guess that, when using litellm, you have to prefix the path with the provider (e.g. huggingface/).

One or two more sentences would clarify the situation.
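
For illustration, a docs snippet along these lines would make the requirement explicit (hypothetical example, reusing the endpoint from this issue and assuming extra keys such as api_base are passed through to LiteLLM like the keyword arguments above):

llm:
  # With method: litellm, the path must carry a LiteLLM provider
  # prefix (e.g. huggingface/) so the request can be routed
  path: huggingface/TheBloke/leo-hessianai-70B-chat-GPTQ
  method: litellm
  api_base: http://127.0.0.1:8080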
