I have set `"n_gpu_layers": -1` in the preset I've selected as the default for a model.
However, when I use the CLI to load that model (`lms load --identifier llama3-8b-8k`, then select the model "Meta-Llama-3-8B-Instruct-Q8_0.gguf" and press Enter), the number of GPU layers used is 10.
Loading with the flag `--gpu max` is not a problem, but not knowing which of the config items are used and which are being ignored is.
(I can't tell from the logs which params the model was loaded with.)
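For anyone hitting the same behavior: until it's clear which preset fields `lms load` honors, forcing full offload on the command line works. This is a sketch based on the flags mentioned above (`--gpu max` and `--identifier`); combining them in a single invocation is an assumption.

```shell
# Load the model and force maximum GPU offload explicitly,
# bypassing whatever GPU-layer count the preset resolves to.
lms load --identifier llama3-8b-8k --gpu max
```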