
Clarifai-LiteLLM : Added clarifai as LLM Provider. #3369

Merged
3 commits merged into BerriAI:main on May 11, 2024

Conversation

mogith-pn
Contributor

Objectives

  • Added Clarifai as an LLM provider.
  • This enables calling LLM models hosted on the Clarifai platform (see the usage sketch below).
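
A minimal usage sketch of what calling a Clarifai-hosted model through LiteLLM could look like - the model route and the CLARIFAI_API_KEY environment variable are illustrative assumptions, not taken verbatim from this PR:

```python
# Hypothetical usage sketch: model route and env var name are assumptions.
import os

from litellm import completion

os.environ["CLARIFAI_API_KEY"] = "<your-clarifai-pat>"  # placeholder credential

response = completion(
    model="clarifai/openai.chat-completion.GPT-4",  # illustrative Clarifai model route
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```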

* intg v1 clarifai-litellm

* Added more community models and testcase

* Clarifai-updated markdown docs

@krrishdholakia
Contributor

Thanks for this PR @mogith-pn. Happy to add support.

Blockers

  • async completion
  • async streaming
  • streaming

Once completed, if you can share a screenshot of these tests passing for you - that would be great!

@mogith-pn
Contributor Author

mogith-pn commented May 2, 2024

Hi @krrishdholakia,
Thanks for your response. Currently we don't support streaming or async completion. Should I include acompletion and stream functions that raise a NotImplementedError, or what do you suggest?

@krrishdholakia
Contributor

For async completions - it's just a call with our async http handler

see anthropic -

self.async_handler = AsyncHTTPHandler(

For streaming - if your backend server doesn't support streaming, then make a normal completion/async completion call and wrap it in an iterator -

completion_stream = ModelResponseIterator(

This will make sure people's calls don't break in prod.
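
For readers following along, here is a minimal sketch of the iterator-wrapping idea - the class below is illustrative and is not litellm's actual ModelResponseIterator:

```python
# Minimal sketch (not litellm's internals): wrap a single, non-streaming
# response so callers that request stream=True still get an iterable of
# chunks instead of a broken call.
from typing import Any


class SingleResponseIterator:
    """Yields one completed response as if it were a one-chunk stream."""

    def __init__(self, model_response: Any) -> None:
        self.model_response = model_response
        self.consumed = False

    def __iter__(self) -> "SingleResponseIterator":
        return self

    def __next__(self) -> Any:
        if self.consumed:
            raise StopIteration
        self.consumed = True
        return self.model_response


# usage: completion_stream = SingleResponseIterator(full_response)
```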

@krrishdholakia
Contributor

krrishdholakia commented May 2, 2024

I don't think your PR uses the clarifai SDK - but if I missed it, can you please use the HTTP endpoints and our httpx clients instead? e.g.

self.async_handler = AsyncHTTPHandler(

This will keep the package light, and let people switch between providers easily.
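
A rough sketch of the suggested HTTP approach, shown with plain httpx for clarity (the PR itself would go through litellm's AsyncHTTPHandler); the endpoint path, payload shape, and env var are assumptions based on Clarifai's public REST API, not code from this PR:

```python
# Illustrative only: endpoint path, payload shape, and env var are assumptions.
import os

import httpx


async def clarifai_complete(prompt: str, model_url: str) -> dict:
    headers = {
        "Authorization": f"Key {os.environ['CLARIFAI_API_KEY']}",  # placeholder PAT
        "Content-Type": "application/json",
    }
    payload = {"inputs": [{"data": {"text": {"raw": prompt}}}]}
    async with httpx.AsyncClient(timeout=600.0) as client:
        resp = await client.post(model_url, json=payload, headers=headers)
        resp.raise_for_status()
        return resp.json()


# usage (illustrative model URL):
# import asyncio
# asyncio.run(clarifai_complete(
#     "Hello!",
#     "https://api.clarifai.com/v2/models/llama2-70b-chat/outputs",
# ))
```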

@mogith-pn
Contributor Author

mogith-pn commented May 3, 2024

Hi @krrishdholakia,
Added the above changes.
Created a test case for async completion in test_clarifai_completion.py and added a streaming test in test_streaming.py. Both passed!
[Screenshots: both test runs passing, May 3, 2024]
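
For reference, a rough sketch of what such an async completion test might look like - the model route and assertion are illustrative, not copied from the PR (assumes pytest-asyncio is installed):

```python
# Illustrative pytest-style sketch, not the actual test from this PR.
import pytest

from litellm import acompletion


@pytest.mark.asyncio
async def test_clarifai_async_completion():
    response = await acompletion(
        model="clarifai/openai.chat-completion.GPT-4",  # illustrative route
        messages=[{"role": "user", "content": "Say hi in one word."}],
        max_tokens=10,
    )
    assert response.choices[0].message.content  # expect a non-empty reply
```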

@mogith-pn
Contributor Author

Hey @krrishdholakia,
Hope you are doing well. Kindly take a look and let me know if this looks good!

@krrishdholakia
Contributor

Great! Planning on merging after we have a stable release out later today 🚀
Thanks for the great work @mogith-pn

Curious - how're you using litellm today?

@krrishdholakia merged commit 8f6ae9a into BerriAI:main on May 11, 2024
1 check passed
@mogith-pn
Contributor Author

@krrishdholakia,
Thanks for the merge. Sorry I missed your question!
We were using it for prototyping with direct OpenAI models, but integrating Clarifai as an LLM provider with LiteLLM was part of our integration goals, so our open-source community users can leverage it.
