feat(llama.cpp): consolidate options and respect tokenizer template when enabled #7120
Merged
Description
Fixes: #7115
Fixes: #6117
This PR aims at two things:
First, it teaches the llama.cpp backend to respect the use_tokenizer_template setting, which is already part of the model's YAML config. This instructs LocalAI to delegate templating to llama.cpp, leaving inline templates available as an option but no longer strictly needed. This allows, for instance, a YAML config to be as small as the sketch below.
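A minimal sketch of such a config; the model name, file, and exact key layout are illustrative assumptions, with use_tokenizer_template as the only template-related knob:

```yaml
name: my-model
backend: llama-cpp
parameters:
  model: my-model.Q4_K_M.gguf
template:
  use_tokenizer_template: true
```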
Which internally would automatically render as:
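A hedged sketch of what the internally rendered form could look like, assuming LocalAI fills in its defaults and leaves prompt templating to the chat template embedded in the GGUF metadata (the extra fields are illustrative assumptions):

```yaml
name: my-model
backend: llama-cpp
parameters:
  model: my-model.Q4_K_M.gguf
context_size: 4096
template:
  use_tokenizer_template: true
  # no inline chat/completion templates needed: llama.cpp applies
  # the chat template embedded in the GGUF tokenizer metadata
```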
Second, it moves some of the options that were previously passed via environment variables into per-model backend options. This allows configuring everything in the model YAML file and avoids generic envs that apply to all loaded models:

- use_jinja / jinja: Enable Jinja2 template processing
- context_shift: Enable dynamic context window adjustment
- cache_ram: Set the KV cache RAM limit (in MiB)
- parallel / n_parallel: Enable parallel request processing with continuous batching
- grpc_servers / rpc_servers: Configure distributed inference across multiple workers

Example:
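A sketch of how these could be set per model, assuming LocalAI's key:value string syntax for backend options (model name, file, and the specific values are illustrative):

```yaml
name: my-model
backend: llama-cpp
parameters:
  model: my-model.Q4_K_M.gguf
options:
  - use_jinja:true
  - context_shift:true
  - cache_ram:4096
  - parallel:4
  - grpc_servers:localhost:50051,localhost:50052
```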