Add Support for Running Your Own HF Pipeline Locally #354

Closed
sam-h-bean opened this issue Dec 16, 2022 · 0 comments

Comments

@sam-h-bean
Contributor

Add support for running your own HF pipeline locally. This would give you much more flexibility in which HF features and models you can support, since you wouldn't be beholden to what is hosted on the HF Hub. You could also use HF Optimum to quantize your models and get reasonably fast inference even on a laptop.

Let me know if you want this code and I will clean it up.
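For context, a minimal sketch of the idea, assuming the `transformers` and `langchain` packages are installed: build a plain transformers pipeline on your own machine and hand it to a LangChain LLM wrapper. The wrapper name `HuggingFacePipeline` matches what was eventually merged for this issue, but the import path and keyword arguments below are illustrative assumptions, not a verbatim copy of the PR.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline  # assumed import path

model_id = "gpt2"  # any checkpoint available locally (or downloadable once)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A plain transformers pipeline runs entirely on the local machine, so you
# are not limited to models served behind the HF Hub inference API.
local_pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=64,
)

# Wrap the local pipeline so it can be used anywhere LangChain expects an LLM.
llm = HuggingFacePipeline(pipeline=local_pipe)
print(llm("Tell me a joke about ducks."))
```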

hwchase17 pushed a commit that referenced this issue Dec 17, 2022
#354

mikeknoop pushed a commit to zapier/langchain-nla-util that referenced this issue Mar 9, 2023
langchain-ai/langchain#354

ZinedineDumas added a commit to ZinedineDumas/React-Python that referenced this issue Jul 17, 2023
langchain-ai/langchain#354
