Replies: 2 comments
@dkindlund did you manage to figure this out? I'm trying to make use of the LiteLLM prompt management feature, where you can pass prompt_id and prompt_parameters to LiteLLM and it will proxy them through to Langfuse. I tried the baseOptions JSON, but that just crashes it completely; I get a connection failure.
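Roughly, the kind of call I was attempting looked like the sketch below. The field names are illustrative, and the inner shape of baseOptions is exactly the part I can't get to work:

```python
# Illustrative sketch only: passing LiteLLM prompt-management parameters
# through Flowise's prediction API via the ChatOpenAI node's baseOptions.
import requests

FLOWISE_URL = "http://localhost:3000"   # assumed local Flowise instance
CHATFLOW_ID = "your-chatflow-id"        # placeholder

payload = {
    "question": "Hello!",
    "overrideConfig": {
        # Hypothetical: the baseOptions JSON I tried to override, carrying
        # the LiteLLM prompt-management fields.
        "baseOptions": {
            "prompt_id": "my-langfuse-prompt",
            "prompt_parameters": {"customer": "acme"},
        }
    },
}

resp = requests.post(f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}", json=payload)
print(resp.status_code, resp.text)
```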
This can serve as inspiration: I was able to make Open WebUI work with LiteLLM using this filter: https://github.com/thiswillbeyourgithub/openwebui_custom_pipes_filters/blob/main/filters/add_metadata.py
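The linked filter does more than this, but the core idea is an inlet hook that attaches metadata to the request body before Open WebUI forwards it to the LiteLLM proxy. A minimal sketch of that idea (not the linked code itself; the field names are illustrative):

```python
# Minimal sketch of an Open WebUI filter that injects metadata into the
# outgoing request body so the LiteLLM proxy can forward it to Langfuse.
from typing import Optional


class Filter:
    def inlet(self, body: dict, __user__: Optional[dict] = None) -> dict:
        # LiteLLM forwards a top-level "metadata" object to its Langfuse logger.
        metadata = body.setdefault("metadata", {})
        metadata["trace_user_id"] = (__user__ or {}).get("email", "unknown")
        metadata["tags"] = ["open-webui"]
        return body
```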
Hey @HenryHengZJ, in Flowise I have an existing chatflow defined, and it's connected to a LiteLLM proxy using the ChatOpenAI chat model node. When I call the predictions API, I'm looking to include metadata that can get sent to the LiteLLM proxy, something like this:
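Something along these lines (a sketch only; the key names are illustrative, and whether overrideConfig is even the right place for this is my question below):

```python
# Rough illustration of the prediction API call I'd like to make,
# with metadata that should end up at the LiteLLM proxy.
import requests

FLOWISE_URL = "http://localhost:3000"   # assumed
CHATFLOW_ID = "your-chatflow-id"        # placeholder

payload = {
    "question": "What is our refund policy?",
    "overrideConfig": {
        # Hypothetical key: metadata to forward to LiteLLM / Langfuse.
        "metadata": {
            "generation_name": "flowise-chatflow",
            "trace_id": "shared-trace-123",
            "trace_user_id": "user-456",
        }
    },
}

resp = requests.post(f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}", json=payload)
print(resp.json())
```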
And when Flowise sends the completion API call over to the LiteLLM proxy, it would sort of look like:
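Again a sketch, with the metadata field shaped the way the LiteLLM Langfuse logging docs (linked below) describe it; the model name and values are illustrative:

```python
# Rough shape of the downstream /chat/completions request to the LiteLLM proxy,
# carrying the same metadata so LiteLLM's Langfuse logger records it.
import requests

LITELLM_PROXY_URL = "http://localhost:4000"   # assumed
LITELLM_API_KEY = "sk-..."                    # placeholder virtual key

completion_request = {
    "model": "gpt-4o",   # whatever model the chatflow's ChatOpenAI node points at
    "messages": [{"role": "user", "content": "What is our refund policy?"}],
    "metadata": {
        "generation_name": "flowise-chatflow",
        "trace_id": "shared-trace-123",   # same trace id so Langfuse can link both sides
        "trace_user_id": "user-456",
    },
}

resp = requests.post(
    f"{LITELLM_PROXY_URL}/chat/completions",
    headers={"Authorization": f"Bearer {LITELLM_API_KEY}"},
    json=completion_request,
)
print(resp.json())
```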
My question is: is overrideConfig the right parameter to use for this?

The reason I'm looking for this is that I not only have Langfuse connected in Flowise, but I also have Langfuse connected in LiteLLM, and I'm looking for an easy way to track the same trace across both sources.
Ref: https://docs.litellm.ai/docs/proxy/logging#logging-metadata-to-langfuse
Thanks in advance!