Max/min length Call options not working #862

Open

guidoveritone opened this issue May 29, 2024 · 1 comment

Comments

guidoveritone commented May 29, 2024

Model: llama3
LangchaingoVersion: v0.1.10

I was trying to use llms.WithMaxLength and llms.WithMinLength to set some output limits, but it seems the model doesn't respect these options:

callOptions = append(callOptions, llms.WithMaxLength(50), llms.WithMinLength(10))

Then I run the model as follows:

contentResponse, err = o.Llm.GenerateContent(ctx, content, callOptions...)
..
	for _, choice := range contentResponse.Choices {
		output += choice.Content
		errors += choice.StopReason
	}
..

But I get responses longer than the limit I set; I don't know whether this is a bug or I'm using the wrong option.
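
For reference, here's a self-contained version of the repro (a minimal sketch; it assumes langchaingo's ollama backend with a local llama3, and that o.Llm in my snippet above is an *ollama.LLM):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	llm, err := ollama.New(ollama.WithModel("llama3"))
	if err != nil {
		log.Fatal(err)
	}

	content := []llms.MessageContent{
		llms.TextParts(llms.ChatMessageTypeHuman,
			"Man is naturally evil or is corrupted by society?"),
	}

	// The options are accepted without error, but the response still
	// comes back longer than the 50 I asked for.
	resp, err := llm.GenerateContent(context.Background(), content,
		llms.WithMaxLength(50), llms.WithMinLength(10))
	if err != nil {
		log.Fatal(err)
	}

	for _, choice := range resp.Choices {
		fmt.Println(choice.Content, choice.StopReason)
	}
}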

I also noticed that if I use WithMaxTokens(2) it works, but it feels like the model is just cutting its response off mid-sentence, since I asked:

prompt: "Man is naturally evil or is corrupted by society?"

And the model gave me:

output: "A Classic"

The problem is that if I increase the MaxTokens value, I get:

output: "A classic debate!\n\nThe idea that man is corrupted by society, also known as the "social corruption" or "societal influence" theory, suggests that human nature is inherently good and that societal factors, such as culture, norms, and institutions"

devalexandre (Contributor) commented

Ollama doesn't support WithMaxLength, only WithMaxTokens.
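
For anyone else hitting this, a minimal sketch of the workaround (assuming a local Ollama serving llama3; WithMaxTokens is the cap the ollama backend actually honors):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	llm, err := ollama.New(ollama.WithModel("llama3"))
	if err != nil {
		log.Fatal(err)
	}

	// WithMaxTokens is a hard token cap, so the model may still stop
	// mid-sentence; for a short but complete answer, also ask for
	// brevity in the prompt itself.
	out, err := llms.GenerateFromSinglePrompt(context.Background(), llm,
		"In at most two sentences: is man naturally evil, or corrupted by society?",
		llms.WithMaxTokens(60))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}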
