
Added a step to check early stop with llamacpp generate #861

Closed

Conversation

wang-haoxian

Fix #837
Added an error that is raised when the LLM stops generating earlier than expected.

In llama-cpp-python, at this line, max_tokens defaults to 16 tokens, which prevents us from generating a complete JSON output.
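
For context, here is a minimal sketch of what such a check can look like against llama-cpp-python's OpenAI-style completion dict. This is illustrative, not the actual diff in this PR; the model path, prompt, and error message are placeholders:

```python
# Minimal sketch of an early-stop check (illustrative, not this PR's diff).
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct-v0.2.Q6_K.gguf")  # placeholder path

# llama-cpp-python defaults max_tokens to 16, which truncates long JSON outputs.
result = llm("Return a JSON object describing a user.", max_tokens=16)

choice = result["choices"][0]
if choice["finish_reason"] == "length":
    # The model hit the token limit before emitting a stop token,
    # so the generated JSON is almost certainly incomplete.
    raise RuntimeError(
        "Generation stopped early: hit max_tokens before the output was "
        "complete. Increase max_tokens to allow a full JSON object."
    )
```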

Development

Successfully merging this pull request may close these issues.

generate.json() gives ValidationError when run with mistral-7b-instruct-v0.2.Q6_K.gguf