
BUG: Loading a model via CLI ignores "n_gpu_layers" parameter in config preset #6

Open
Propheticus opened this issue May 2, 2024 · 1 comment


@Propheticus

I have set `"n_gpu_layers": -1` in the preset I've selected as default for a model.
However, when I use the CLI to load that model (`lms load --identifier llama3-8b-8k`, then select "Meta-Llama-3-8B-Instruct-Q8_0.gguf" and press Enter), the number of GPU layers used is 10.
Loading with the flag `--gpu max` works around it, but not knowing which of the config items are applied and which are ignored is a problem.
(I can't tell from the logs which params the model is loaded with.)
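
For context, the relevant part of the preset looks roughly like this. This is a minimal sketch, not the full file: the `load_params` wrapper and `"n_ctx": 8192` are assumptions about the preset layout, only `"n_gpu_layers": -1` is the setting in question.

```jsonc
{
  "load_params": {
    // Assumed context length for the "llama3-8b-8k" preset (not confirmed).
    "n_ctx": 8192,
    // -1 should offload all layers to the GPU, yet the CLI loads only 10.
    "n_gpu_layers": -1
  }
}
```

Passing the GPU option explicitly on the command line does work:

```
lms load --identifier llama3-8b-8k --gpu max
```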

@ryan-the-crayon
Contributor

Thanks for the report; I'll investigate this soon.
