There are projects like Helicone which provide a wrapper around the OpenAI API for tracking prompt/response quality over time, error rates, pricing, token usage, etc. Additionally, some people run OpenAI API-compatible open-source model servers, e.g. based on LLaMA. So it would be great if the user could change the baseURL in the settings to use other services.
From the Helicone docs, the implementation should be pretty easy:
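As a rough sketch of what the setting could look like (the env-var names and `build_chat_request` helper here are hypothetical, not part of the app): the client reads the base URL from configuration instead of hard-coding `api.openai.com`, and adds Helicone's documented `Helicone-Auth` header when its proxy is used.

```python
import json
import os
import urllib.request

# Hypothetical setting: OPENAI_BASE_URL lets the user point the app at
# Helicone's proxy (https://oai.helicone.ai/v1) or any other
# OpenAI-compatible server; defaults to the official endpoint.
BASE_URL = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")


def build_chat_request(messages, model="gpt-3.5-turbo"):
    """Build a /v1/chat/completions request against the configured base URL."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    # Helicone's proxy additionally expects its own auth header.
    if "helicone" in BASE_URL:
        headers["Helicone-Auth"] = f"Bearer {os.environ.get('HELICONE_API_KEY', '')}"
    body = json.dumps(
        {"model": model, "messages": messages, "stream": True}
    ).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions", data=body, headers=headers
    )
```

Only the base URL changes; the request path, body, and streaming flag stay exactly as they are for the official API, which is why swapping in any OpenAI-compatible server should work.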
Great idea. I tried this change, and while the app works, nothing shows up on the Helicone dashboard.
I've asked in the Helicone Discord whether there are known issues with the /v1/chat/completions API, or with stream: true (used for streaming output).