huggingface text-generation-inference support? #89
Comments
@collindutter this would be very useful, as most production setups have their own TGI/vLLM deployments, and I guess they are pretty similar in principle (same as OpenAI and other endpoints).
@maziyarpanahi roger that! I will take a look at implementing this today!
@maziyarpanahi it looks like you can use TGI through the Inference Client, which is already a part of …
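For anyone landing here, a minimal sketch of what the comment above suggests: pointing huggingface_hub's InferenceClient at a locally hosted TGI server instead of the hosted Hub. The endpoint URL and prompt are assumptions (TGI commonly listens on port 8080 locally); this is not the Griptape driver itself, just the underlying client call.

```python
# Sketch: querying a locally hosted text-generation-inference (TGI)
# server through huggingface_hub's InferenceClient.
# ASSUMPTION: the URL below is a placeholder; substitute your own
# TGI endpoint (TGI's default local port is 8080).
from huggingface_hub import InferenceClient

client = InferenceClient(model="http://localhost:8080")

# With a TGI server running at that URL, text_generation() sends the
# prompt to the server's generate route and returns the completion:
# output = client.text_generation("What is Griptape?", max_new_tokens=64)
# print(output)
```

Since the client only needs a URL, the same code works whether the server is a local docker container or a remote deployment, which is presumably why the Inference Client route was suggested over a dedicated driver.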
This is great! I have TGI endpoints up and running. I can test any example you provide and share the results.
Fantastic, thank you for reporting back! I will add a note to the docs indicating that …
We're standing up a local hf text gen service (it's pretty good):
https://github.com/huggingface/text-generation-inference
Griptape already has a HuggingFaceHubPromptDriver and a HuggingFacePipelinePromptDriver; could you add one that can interface with a locally hosted huggingface text-generation-inference server?
Here's the LangChain alternative:
https://python.langchain.com/docs/modules/model_io/models/llms/integrations/huggingface_textgen_inference
Thank you for developing griptape! It looks like it's gonna be awesome!