Estimating the cost of each LLM call is a handy tool, but if we start supporting non-OpenAI models we might want to do it in a more general way. Specifically, GooseAI (added in #30) doesn't return a count of tokens used, so we would need to count the tokens ourselves given the prompt and completion (which means knowing which tokenizer the model used). Its pricing math also differs from OpenAI's, so if we want to improve or expand this feature it might make sense to have LLM provider-specific functions that calculate the cost. Alternatively, we could use a heuristic (usually just dividing the character count by a constant) and verify it stays within an acceptable error tolerance.
For now we'll probably only have cost estimation for OpenAI.
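The provider-specific approach could be sketched like this. This is a minimal illustration only: the prices, the chars-per-token constant, and the function/dict names (`COST_FUNCTIONS`, `estimate_cost`, etc.) are all hypothetical assumptions, not actual rates or APIs.

```python
from typing import Callable, Dict, Optional

# Rough heuristic: ~4 characters per token for English text (assumed constant).
CHARS_PER_TOKEN = 4


def estimate_tokens(text: str) -> int:
    """Fallback token count for providers that don't report usage."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def openai_cost(prompt: str, completion: str, usage: dict) -> float:
    # OpenAI responses include exact token usage, so use it directly.
    # Illustrative flat price of $0.02 per 1K tokens.
    return usage["total_tokens"] / 1000 * 0.02


def gooseai_cost(prompt: str, completion: str, usage: dict) -> float:
    # GooseAI doesn't return token counts, so estimate from text length.
    tokens = estimate_tokens(prompt) + estimate_tokens(completion)
    return tokens / 1000 * 0.01  # illustrative rate


# Registry of per-provider cost functions; new providers register here.
COST_FUNCTIONS: Dict[str, Callable[[str, str, dict], float]] = {
    "openai": openai_cost,
    "gooseai": gooseai_cost,
}


def estimate_cost(
    provider: str, prompt: str, completion: str, usage: Optional[dict] = None
) -> float:
    """Dispatch to the provider's cost function."""
    return COST_FUNCTIONS[provider](prompt, completion, usage or {})
```

The dict-based dispatch keeps the heuristic math isolated per provider, so an exact token count (OpenAI) and a character-count estimate (GooseAI) can coexist behind one interface.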