Hi,
I downloaded 'gemma-3-270-m-q4_k_m.gguf' from Hugging Face and attempted to bundle it via project assets. The file is correctly picked up and copied, but the model itself then fails to load.
The error reported is:
failed to load model from /data/user/0/com.jegly.offlineLLM/files/models/gemma-3-270-m-q4_k_m.gguf
The same error occurs when the model is loaded via the file import. The failure originates in llama_model_load_from_file_impl.
I can see log references in this C code, but I'm not sure where they go, as nothing shows up in logcat.
So I guess I'd like to know:
- How could I view the LLAMA_LOG_ERROR outputs?
- What could cause the loadModel() to fail?
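For context, llama.cpp writes its diagnostics to stderr by default, which Android discards rather than routing to logcat. A minimal sketch of redirecting the library's logging (including LLAMA_LOG_ERROR output) into logcat via the public llama_log_set API, assuming the stock llama.h/ggml.h headers and the NDK log header; the function name install_llama_logcat_bridge is my own:

```cpp
// Sketch: forward llama.cpp's internal logging to Android logcat.
// Assumes llama.h (which pulls in ggml.h) and the Android NDK are available.
#include <android/log.h>
#include "llama.h"

// Callback matching ggml_log_callback; maps ggml log levels to logcat priorities.
static void logcat_callback(enum ggml_log_level level, const char * text, void * /*user_data*/) {
    int prio = (level == GGML_LOG_LEVEL_ERROR) ? ANDROID_LOG_ERROR
             : (level == GGML_LOG_LEVEL_WARN)  ? ANDROID_LOG_WARN
             :                                   ANDROID_LOG_INFO;
    __android_log_print(prio, "llama.cpp", "%s", text);
}

// Call this once (e.g. from JNI_OnLoad or before loadModel())
// so that subsequent model-load errors appear under the "llama.cpp" tag.
void install_llama_logcat_bridge() {
    llama_log_set(logcat_callback, nullptr);
}
```

With this installed before the model load, `adb logcat -s llama.cpp` should show the detailed reason the load fails (bad magic, truncated download, unsupported quantization, etc.).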