Issues: ggml-org/llama.cpp
- #13827 [bug-unconfirmed] Eval bug: Llama 4 Scout/Maverick crash when processing images with certain aspect ratio (opened May 27, 2025 by hjc4869)
- #13825 [bug-unconfirmed] Eval bug: Uncaught exception [json.exception.parse_error.101] during tool use crashes llama-server (opened May 27, 2025 by bjodah)
- #13823 [bug-unconfirmed] Eval bug: seed seems to be locked to a single value 4294967295 (opened May 27, 2025 by drazdra)
- #13812 [bug-unconfirmed] Eval bug: uncaught std::runtime_exception thrown in llama-server during tool use (opened May 26, 2025 by bjodah)
- #13805 ERROR:hf-to-gguf:Model MllamaForConditionalGeneration is not supported (opened May 26, 2025 by andreasspap)
- #13801 [bug-unconfirmed] Compile bug: Vulkan build fails in Termux/proot due to missing cooperative matrix shader variables (opened May 26, 2025 by Manamama)
- #13798 [bug-unconfirmed] Misc. bug: Streaming tool calls do not return "type": "function", unlike non-stream (opened May 26, 2025 by GinkREAL)
- #13775 SYCL fails to initialize unless iGPU is disabled (Intel Arc A770 + i5-9500) (opened May 25, 2025 by deadpipe)
- #13769 [bug-unconfirmed] Misc. bug: Decreased success rate for tool calling (opened May 25, 2025 by jean-rl)
- #13767 [bug-unconfirmed] Misc. bug: llama-cli.exe stopped working on Windows Server 10 (opened May 25, 2025 by ajhwb)
- #13766 [bug-unconfirmed] Eval bug: stream with tool_call fix in b5478 crashes in container; issues with calls from apps (opened May 25, 2025 by jax0m)
- #13765 Misc. bug: Vulkan prompt processing suddenly slows down once a certain prompt size is reached (opened May 25, 2025 by netrunnereve)
- #13754 [enhancement] Feature Request: video support in mtmd-cli / server (opened May 24, 2025 by jacekpoplawski)
- #13748 [enhancement] Feature Request: Add keep_alive function for llama-server (opened May 24, 2025 by ylsdamxssjxxdd)
- #13747 [enhancement] Feature Request: --swa-extra parameter needed to restore speculative decode function with SWA (opened May 24, 2025 by steampunque)
- #13740 [bug-unconfirmed] Misc. bug: RUNPATH properties are not properly set (opened May 24, 2025 by sunhaitao)
- #13727 [bug-unconfirmed] Eval bug: Server and mtmd both crash when starting Ultravox (opened May 23, 2025 by sinand99)
- #13723 Unable to deploy the fine-tuned qwen2.5-vl-7b using llama.cpp (opened May 23, 2025 by songzhaohui12)