On Tue, Mar 25, 2025 at 07:24:10PM -0700, NeoZhangJianyu left a comment (ggml-org/llama.cpp#12575):
@ky438
I can't see any profile info for this GitHub account, and I see several comments on different PRs/issues created by this account on the same day. Could you share the background of this issue?
Name and Version
Operating systems
Linux
Which llama.cpp modules do you know to be affected?
llama-bench
Command line
Problem description & steps to reproduce
I notice that performance drops drastically and run-to-run variance explodes when two Intel Arc B580 GPUs are used instead of one:
2x GPUs:
1x GPU with -sm none:
Why is this?
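For anyone trying to reproduce this, the comparison can be run with llama-bench roughly as in the sketch below. The model path is a placeholder and the SYCL backend with ONEAPI_DEVICE_SELECTOR device filtering is an assumption, since the exact command line was not included above.

```shell
# Sketch only: model path is a placeholder; the SYCL backend and Level Zero
# device numbering are assumptions (the reporter's exact command is not shown).

# Both B580s visible, default layer split across the two GPUs:
./build/bin/llama-bench -m model.gguf -ngl 99

# Single B580, with splitting disabled (-sm none uses the main GPU only):
ONEAPI_DEVICE_SELECTOR=level_zero:0 ./build/bin/llama-bench -m model.gguf -ngl 99 -sm none
```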
First Bad Commit
No response
Relevant log output