
Why use helicone? #12

Open
krrishdholakia opened this issue Sep 23, 2023 · 4 comments

Comments

@krrishdholakia

```python
litellm.api_base = "https://oai.hconeai.com/v1"
```

Hey @nsbradford,

I saw you're logging responses to promptlayer but also using helicone. Curious - why?

If it's for caching - is there something you think is missing from our implementation? https://docs.litellm.ai/docs/caching/
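(For context, the proxy hookup quoted above works roughly like this with litellm. A minimal sketch; the `Helicone-Auth` header and the env var name are assumptions based on Helicone's proxy docs, not code from this repo:)

```python
import os
import litellm

# Point litellm's OpenAI-compatible calls at Helicone's proxy,
# which logs each request/response and can serve cached replies.
litellm.api_base = "https://oai.hconeai.com/v1"
# Helicone identifies your account via this header (assumed key name).
litellm.headers = {"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"}

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["choices"][0]["message"]["content"])
```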

@nsbradford
Owner

AFAIK LiteLLM does not support a managed caching solution; the docs only mention self-managed Redis.
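For reference, the self-managed setup those docs describe looks roughly like this (a sketch assuming litellm's `Cache` class and a Redis instance you run yourself):

```python
import litellm
from litellm.caching import Cache

# Self-managed: you provision, secure, and monitor this Redis yourself.
litellm.cache = Cache(type="redis", host="localhost", port="6379", password="")

# The second identical call should be answered from Redis, not the API.
for _ in range(2):
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
```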

@krrishdholakia
Author

Tracking this here - BerriAI/litellm#432

Do you want us to provide a hosted caching solution? @nsbradford

@nsbradford
Owner

TBH, not a top priority. Writing my own cache only takes <1 hour, and then I don't have to worry about whether third-party caching middleware is reliable; in practice I tend to write my own cache on larger projects.
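(For illustration, the kind of hand-rolled cache being described, as a hedged sketch: key on a hash of the request, store completions on disk. `cached_completion` and the cache directory are made-up names, not code from this repo.)

```python
import hashlib
import json
from pathlib import Path

import litellm

CACHE_DIR = Path(".llm_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cached_completion(model: str, messages: list) -> str:
    # Key on the exact request payload so any change busts the cache.
    key = hashlib.sha256(
        json.dumps({"model": model, "messages": messages}, sort_keys=True).encode()
    ).hexdigest()
    path = CACHE_DIR / f"{key}.json"
    if path.exists():
        return path.read_text()
    response = litellm.completion(model=model, messages=messages)
    content = response["choices"][0]["message"]["content"]
    path.write_text(content)
    return content
```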

@nsbradford
Owner

(would be open to it if implemented, though.)
