
Misc. bug: graphics-hook.cpp reports "Failed to open pipe" and "could not get device address for XXX" #17498

@MaoJianwei

Description

Name and Version

PS F:\llm\llama-b7157-bin-win-vulkan-x64> .\llama-cli.exe --version
load_backend: loaded RPC backend from F:\llm\llama-b7157-bin-win-vulkan-x64\ggml-rpc.dll
[2025-11-26 00:06:44.918][info][10144] [huya-helper.cpp:378#init_log] graphic-hook 64bit log init suceed.
exe:llama-cli.exe, pid:2548
ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = Quadro P620 (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
ggml_vulkan: 1 = Intel(R) UHD Graphics 630 (Intel Corporation) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none
load_backend: loaded Vulkan backend from F:\llm\llama-b7157-bin-win-vulkan-x64\ggml-vulkan.dll
load_backend: loaded CPU backend from F:\llm\llama-b7157-bin-win-vulkan-x64\ggml-cpu-haswell.dll
version: 7157 (583cb8341)
built with clang version 19.1.5 for x86_64-pc-windows-msvc
PS F:\llm\llama-b7157-bin-win-vulkan-x64>

Operating systems

No response

Which llama.cpp modules do you know to be affected?

No response

Command line

PS F:\llm\llama-b7157-bin-win-vulkan-x64> .\llama-server.exe --model ..\Qwen3-1.7B-gguf-Q4_K_M.gguf    --no-mmap --jinja --verbose-prompt --alias Qwen3-4B-Instruct-2507-gguf-Q4_K_M   --host 0.0.0.0 --flash-attn on --cache-type-k q8_0 --cache-type-v q8_0 -ngl 100 --metrics

Problem description & steps to reproduce

[2025-11-26 00:03:54.275][info][13612] [huya-helper.cpp:378#init_log] graphic-hook 64bit log init suceed.
exe:llama-server.exe, pid:13564
ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = Quadro P620 (NVIDIA) | uma: 0 | fp16: 0 | bf16: 0 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
ggml_vulkan: 1 = Intel(R) UHD Graphics 630 (Intel Corporation) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 32 | shared memory: 32768 | int dot: 0 | matrix cores: none
[2025-11-26 00:03:54.376][info][15840] [graphics-hook.cpp:82#init_pipe] [OBS] Failed to open pipe

[2025-11-26 00:03:54.377][info][15840] [graphics-hook.cpp:474#hlogv] [OBS]graphics-hook.dll loaded against process: llama-server.exe
[2025-11-26 00:03:54.378][info][15840] [graphics-hook.cpp:474#hlogv] [OBS](half life scientist) everything..  seems to be in order

load_tensors: loading model tensors, this can take a while... (mmap = false)
[2025-11-26 00:03:54.613][info][13612] [graphics-hook.cpp:474#hlogv] [OBS]OBS_CreateDevice: could not get device address for vkCreateSwapchainKHR
[2025-11-26 00:03:54.613][info][13612] [graphics-hook.cpp:474#hlogv] [OBS]OBS_CreateDevice: could not get device address for vkDestroySwapchainKHR
[2025-11-26 00:03:54.614][info][13612] [graphics-hook.cpp:474#hlogv] [OBS]OBS_CreateDevice: could not get device address for vkQueuePresentKHR
[2025-11-26 00:03:54.615][info][13612] [graphics-hook.cpp:474#hlogv] [OBS]OBS_CreateDevice: could not get device address for vkGetSwapchainImagesKHR
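
For context, the "Failed to open pipe" line comes from OBS's injected graphics-hook trying to reach the OBS capture pipe (presumably no OBS capture is attached to this process), and the "could not get device address" lines refer to vkCreateSwapchainKHR, vkDestroySwapchainKHR, vkQueuePresentKHR and vkGetSwapchainImagesKHR, which all belong to the VK_KHR_swapchain extension. A headless compute process such as llama-server has no reason to enable that extension, so a vkGetDeviceProcAddr lookup for those entry points commonly returns NULL, which is what the hook logs. Below is a minimal standalone sketch of that lookup (an illustration only, not llama.cpp or OBS code; device and queue selection are simplified and queue family 0 is assumed to be usable):

// sketch.cpp: create a Vulkan device WITHOUT VK_KHR_swapchain and try to
// resolve a swapchain entry point, mirroring the lookup the hook complains about.
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ici{};
    ici.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ici.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    // Grab the first physical device (e.g. the Quadro P620 or UHD 630 above).
    uint32_t count = 1;
    VkPhysicalDevice phys = VK_NULL_HANDLE;
    vkEnumeratePhysicalDevices(instance, &count, &phys);
    if (count == 0 || phys == VK_NULL_HANDLE) {
        fprintf(stderr, "no Vulkan devices found\n");
        return 1;
    }

    float prio = 1.0f;
    VkDeviceQueueCreateInfo qci{};
    qci.sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    qci.queueFamilyIndex = 0;   // assumption: queue family 0 is usable for this sketch
    qci.queueCount = 1;
    qci.pQueuePriorities = &prio;

    VkDeviceCreateInfo dci{};
    dci.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    dci.queueCreateInfoCount = 1;
    dci.pQueueCreateInfos = &qci;
    // Deliberately no enabled extensions here, i.e. no VK_KHR_swapchain,
    // just like a compute-only workload.

    VkDevice device = VK_NULL_HANDLE;
    if (vkCreateDevice(phys, &dci, nullptr, &device) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateDevice failed\n");
        return 1;
    }

    // With VK_KHR_swapchain not enabled, this lookup commonly yields NULL,
    // which is what graphics-hook reports as "could not get device address".
    PFN_vkVoidFunction fp = vkGetDeviceProcAddr(device, "vkCreateSwapchainKHR");
    printf("vkCreateSwapchainKHR: %s\n",
           fp ? "resolved" : "not available (extension not enabled)");

    vkDestroyDevice(device, nullptr);
    vkDestroyInstance(instance, nullptr);
    return 0;
}

Built against the Vulkan SDK (vulkan-1.lib), this prints "not available" on a typical driver, which is the benign condition behind the log lines above; the messages come from the injected third-party hook, not from llama.cpp itself.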

First Bad Commit

No response

Relevant log output


Labels

    3rd party: Issue related to a 3rd party project
    wontfix: This will not be worked on
