Name and Version
b6976
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
llama-server, Other (Please specify in the next section)
Command line
General benchmarking of Qwen3 VL models (running the CHARTQA benchmark will show the problem); an example launch command is sketched below.
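A minimal sketch of the kind of launch command used for this benchmarking (the GGUF file names, port, and -ngl value are placeholders, not the exact setup):

llama-server \
  -m Qwen3-VL-8B-Instruct-Q8_0.gguf \
  --mmproj mmproj-Qwen3-VL-8B-Instruct-F16.gguf \
  -ngl 99 --port 8080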
Problem description & steps to reproduce
Large accuracy drop on Qwen3 VL models on the CHARTQA benchmark, from ~0.8 down to ~0.4, after the b6976 changes to mtmd. Observed with Qwen 3 VL 8B Instruct.
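The benchmark drives the server's OpenAI-compatible vision endpoint, so each question is a request of roughly this shape (host, port, and prompt are illustrative; the image is sent as a base64 data URI):

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        { "type": "image_url", "image_url": { "url": "data:image/png;base64,<chart image>" } },
        { "type": "text", "text": "What is the highest value shown in the chart?" }
      ]
    }]
  }'

Each such request runs the image through the mtmd preprocessing/encoding path, which is the code that changed in b6976.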
First Bad Commit
b6976
Relevant log output