Another llama.cpp feature that seems to have shrunk the paper-to-implementation pipeline to less than one week!
This allows for a much longer context (assuming you have the (V)RAM for it)
We can probably close out #77 if this is done.
**LLukas22:** To do this we only need a new `rope_scaling` model parameter. Or am I missing something?
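For context, the llama.cpp feature referenced here is RoPE scaling via linear position interpolation: positions are divided by a scale factor before the rotary angles are computed, so a model trained on a 2k window can attend over a longer sequence. Below is a minimal sketch of that idea. Only the `rope_scaling` parameter name comes from this thread; the function name, signature, and frequency base are illustrative assumptions, not the project's actual API.

```rust
/// Illustrative sketch of RoPE with linear position interpolation.
/// `rope_scaling` here is the scale factor discussed above; everything
/// else (names, layout) is hypothetical.
fn apply_rope_scaled(x: &mut [f32], position: usize, rope_scaling: f32) {
    let dim = x.len(); // assumed even: dimensions are rotated in pairs
    let base: f32 = 10_000.0; // standard RoPE frequency base

    // Interpolated position: with rope_scaling = 4.0, position 8192 is
    // mapped back into the trained range as 2048.
    let pos = position as f32 / rope_scaling;

    for i in (0..dim).step_by(2) {
        // Per-pair rotation angle, as in the original RoPE formulation.
        let theta = pos * base.powf(-(i as f32) / dim as f32);
        let (sin, cos) = theta.sin_cos();
        let (x0, x1) = (x[i], x[i + 1]);
        x[i] = x0 * cos - x1 * sin;
        x[i + 1] = x0 * sin + x1 * cos;
    }
}

fn main() {
    let mut q = vec![1.0_f32; 8];
    // rope_scaling = 1.0 reduces to plain RoPE; larger values trade
    // rotational resolution for a longer usable context window.
    apply_rope_scaled(&mut q, 4096, 4.0);
    println!("{q:?}");
}
```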