Description
Hi,
Q1: Is there a way to pass custom options directly to the LLM? For example, I'd like to pass verbosity and reasoning_effort for the gpt-5 family via Azure OpenAI. I couldn't make it work: setting reasoningEffort in forwardOptions.modelConfig didn't show up in the request body printed by the logger.
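For reference, roughly what I tried (a sketch; the call shape is approximated from the option names above, and `gen`/`ai` stand in for my generator and Azure OpenAI client):

```ts
// Sketch of the attempt described above -- option names taken from this
// question, not verified against the library's typings.
const result = await gen.forward(ai, { question: 'hello' }, {
  modelConfig: {
    reasoningEffort: 'high', // expected to map to reasoning_effort in the body
    verbosity: 'low',        // expected to map to verbosity in the body
  },
});
// Neither field appeared in the request body shown by the logger.
```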
Q2: Do you support OpenAI's Responses API? If not, do you plan to maintain the various provider integrations, or is there perhaps an easier way to work around the different providers' quirks?
Q3: Is it possible to track/log cached_tokens and reasoning_tokens from responses, or only the standard usage counters?
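For context, these are the fields I mean, as they appear in a raw OpenAI chat completions response (standard OpenAI usage shape):

```ts
// Usage block of a raw OpenAI chat completions response; the nested
// *_details fields are the ones I'd like surfaced.
interface OpenAIUsage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  prompt_tokens_details?: { cached_tokens: number };
  completion_tokens_details?: { reasoning_tokens: number };
}

function logDetailedUsage(usage: OpenAIUsage): void {
  console.log('cached_tokens:', usage.prompt_tokens_details?.cached_tokens ?? 0);
  console.log('reasoning_tokens:', usage.completion_tokens_details?.reasoning_tokens ?? 0);
}
```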
Q4: Why is the stream field exposed both in modelConfig and directly in forwardOptions, while it's also possible to stream via streamingForward?
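To illustrate, these are the three ways streaming appears to be configurable (call shapes approximated from the names above, not verified):

```ts
// Three apparent ways to request streaming, as described in this question.
await gen.forward(ai, values, { stream: true });                  // forwardOptions.stream
await gen.forward(ai, values, { modelConfig: { stream: true } }); // modelConfig.stream
const stream = gen.streamingForward(ai, values);                  // dedicated streaming call
```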
Q5: How can I log specific params per HTTP request, such as top_p? The simple logger shows the request body with the prompt messages only; I don't see the params there.
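If there's no built-in way, a workaround I'd consider is wrapping fetch to dump the params from each outgoing request body (a generic sketch, assuming the client can be given a custom fetch implementation, which I haven't verified for this library):

```ts
// Generic fetch wrapper that logs sampling params like top_p from the JSON
// request body before forwarding the call.
const loggingFetch: typeof fetch = async (input, init) => {
  if (init?.body && typeof init.body === 'string') {
    try {
      const body = JSON.parse(init.body);
      console.log('LLM request params:', {
        top_p: body.top_p,
        temperature: body.temperature,
        max_tokens: body.max_tokens,
      });
    } catch {
      // body was not JSON; nothing to log
    }
  }
  return fetch(input, init);
};
```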
Thanks