
Conversation

@arthw (Contributor) commented Jan 21, 2026

Fixes issue #9241.

The host_buffer is used to hold host-side copies of device memory.
The original code used malloc_host(), which binds the allocation to a queue's device and context.
When memory is copied from another device whose context differs from the one malloc_host() was called with, the memcpy() is forbidden:
SYCL requires both queues to share the same context when copying memory between devices.
The same restriction applies to memory allocated with malloc_host().
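The constraint can be illustrated with a minimal SYCL sketch (not the actual ggml-sycl code; the variable names and the two-GPU setup are assumptions, and it requires a SYCL toolchain plus two GPUs in separate contexts to reproduce):

```cpp
#include <sycl/sycl.hpp>

int main() {
    auto gpus = sycl::device::get_devices(sycl::info::device_type::gpu);
    sycl::queue q0{gpus[0]};  // first GPU, context A
    sycl::queue q1{gpus[1]};  // second GPU, context B (a different context)

    // Host memory allocated via malloc_host() is tied to q1's context.
    float *host = sycl::malloc_host<float>(1024, q1);
    // Device memory on the first GPU lives in q0's context.
    float *dev0 = sycl::malloc_device<float>(1024, q0);

    // Invalid when q0.get_context() != q1.get_context():
    // the Level Zero backend rejects the cross-context copy.
    // q0.memcpy(host, dev0, 1024 * sizeof(float));

    sycl::free(host, q1);
    sycl::free(dev0, q0);
}
```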

In the issue's case, the two dGPUs belong to different contexts, and malloc_host() is executed on the queue of the second GPU.
With -ngl 48 (49 layers total), during model load the tensors of the first GPU are copied to the host_buffer through the queue of the second GPU, which triggers the Level Zero (L0) error.

To support iGPU+dGPU, we can't resolve this by putting both GPUs' queues into the same context: an iGPU and a dGPU are different device families and cannot share a context.

So we replace malloc_host() with plain malloc() to support more multi-GPU configurations.
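The change can be sketched as a diff (simplified and hypothetical; the actual allocation site and variable names in ggml-sycl may differ):

```diff
-// malloc_host() binds the buffer to one queue's context, so a copy
-// issued from a queue with a different context is rejected by L0
-void * buf = sycl::malloc_host(size, stream);
+// plain malloc(): pageable host memory, usable from any queue/context
+void * buf = malloc(size);
```

Plain malloc() gives pageable host memory that is not registered with any SYCL context, so every device's queue can copy into it; the trade-off is losing pinned-memory transfer optimizations, which the author reports has no performance impact here.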

It does not impact performance.

@arthw arthw mentioned this pull request Jan 21, 2026
@github-actions bot added labels Jan 21, 2026: ggml (changes relating to the ggml tensor library for machine learning), SYCL (https://en.wikipedia.org/wiki/SYCL - GPU programming language)
@NeoZhangJianyu NeoZhangJianyu merged commit cb6caca into ggml-org:master Jan 23, 2026
147 of 149 checks passed

2 participants