Model:
llama3

LangchaingoVersion:
v0.1.10

I was trying to use llms.WithMaxLength and llms.WithMinLength to set some output limits, but it seems like the model doesn't respect these options. Then I run the model as follows:
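A minimal sketch of such a call, assuming langchaingo's llms and ollama packages (the original snippet isn't shown above, so the limit values here are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	// Build an ollama-backed LLM using the llama3 model.
	llm, err := ollama.New(ollama.WithModel("llama3"))
	if err != nil {
		log.Fatal(err)
	}

	// Ask for a bounded response via the length options in question.
	resp, err := llms.GenerateFromSinglePrompt(
		context.Background(),
		llm,
		"Man is naturally evil or is corrupted by society?",
		llms.WithMinLength(10),
		llms.WithMaxLength(50),
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```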
But I get responses with lengths higher than the limit I set; I don't know if this is a bug or if I am using the wrong option.

I also noticed that if I use WithMaxTokens(2) it works, but it feels like the model is just cutting off its response, since I asked:
prompt: "Man is naturally evil or is corrupted by society?"
And the model gave me:
output: "A Classic"
But the problem is, if I increase the MaxTokens value, I get:

output: "A classic debate!\n\nThe idea that man is corrupted by society, also known as the \"social corruption\" or \"societal influence\" theory, suggests that human nature is inherently good and that societal factors, such as culture, norms, and institutions"
ollama doesn't support WithMaxLength yet, only WithMaxTokens.
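If that's the case, a sketch of the workaround using WithMaxTokens instead (this continues the setup from the snippet above; that the ollama backend forwards MaxTokens as the model's prediction limit is an assumption based on the behavior described, and 64 is an arbitrary value):

```go
// Reusing llm from the setup above; only the call option changes.
resp, err := llms.GenerateFromSinglePrompt(
	context.Background(),
	llm,
	"Man is naturally evil or is corrupted by society?",
	// Caps the number of generated tokens, so the reply may stop
	// mid-sentence, as seen with WithMaxTokens(2) yielding "A Classic".
	llms.WithMaxTokens(64),
)
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp)
```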