
Conversation

zayac (Contributor) commented Nov 12, 2025

Summary

Implements LOG operation for the Vulkan backend with F32 and F16 support.

Part of #14909.
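
For reference, an element-wise LOG kernel reduces to one log() call per element. The sketch below is illustrative only, covering just the F32 case with an assumed flat buffer layout, workgroup size, and push-constant name; it is not the shader added in this PR, which also provides an F16 variant.

#version 450
// Rough sketch of an element-wise log compute shader (F32 only).
// Buffer bindings, the push-constant block, and the workgroup size are
// illustrative assumptions, not the layout of the real ggml-vulkan shader.
layout(local_size_x = 256) in;

layout(binding = 0, std430) readonly  buffer Src { float src_data[]; };
layout(binding = 1, std430) writeonly buffer Dst { float dst_data[]; };

layout(push_constant) uniform Params { uint n_elements; } p;

void main() {
    const uint i = gl_GlobalInvocationID.x;
    if (i >= p.n_elements) {
        return;  // guard the tail when n_elements is not a multiple of 256
    }
    dst_data[i] = log(src_data[i]);  // natural logarithm, as in GGML_OP_LOG
}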

Testing

./build/bin/test-backend-ops -o LOG
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = NVIDIA GeForce RTX 5070 (NVIDIA) | uma: 0 | fp16: 1 | bf16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: NV_coopmat2
Testing 2 devices

Backend 1/2: Vulkan0
  Device description: NVIDIA GeForce RTX 5070
  Device memory: 12227 MB (10998 MB free)

  LOG(type=f16,ne=[10,5,4,3]): OK
  LOG(type=f16,ne=[7,1,5,3]): OK
  LOG(type=f32,ne=[10,5,4,3]): OK
  LOG(type=f32,ne=[7,1,5,3]): OK
  4/4 tests passed
  Backend Vulkan0: OK
Backend 2/2: CPU
  Skipping CPU backend
2/2 backends passed
OK

zayac requested a review from 0cc4m as a code owner November 12, 2025 00:48
github-actions bot added labels: documentation (Improvements or additions to documentation), Vulkan (Issues specific to the Vulkan backend), ggml (changes relating to the ggml tensor library for machine learning) on Nov 12, 2025
zayac (Contributor, Author) commented Nov 12, 2025

Thanks, Jeff, for the feedback. I addressed all the comments. PTAL.

zayac requested a review from jeffbolznv November 12, 2025 21:45
zayac (Contributor, Author) commented Nov 14, 2025

Hi @0cc4m, could you please review / approve this PR?

0cc4m (Collaborator) left a comment

Please fix the shaders-gen issue and rebase to fix the docs/ops conflict. Otherwise this is fine, thank you for the contribution!

zayac force-pushed the vulkan-log-operation branch 2 times, most recently from d1bcd18 to 0f31caa on November 15, 2025 22:10
0cc4m (Collaborator) commented Nov 16, 2025

Someone has updated the same line in ops.md, so there's another conflict, sorry. I'll merge once you rebase it again.

zayac force-pushed the vulkan-log-operation branch from 06c5543 to f7dce2c on November 16, 2025 21:33
zayac force-pushed the vulkan-log-operation branch from f7dce2c to ba6f9c7 on November 16, 2025 21:39
zayac (Contributor, Author) commented Nov 16, 2025

> Someone has updated the same line in ops.md, so there's another conflict, sorry. I'll merge once you rebase it again.

Should be good now. Please merge as fast as possible :)

0cc4m (Collaborator) left a comment

Thank you!

0cc4m merged commit dbed612 into ggml-org:master Nov 16, 2025 (1 check passed)
basnijholt pushed a commit to basnijholt/llama.cpp that referenced this pull request Nov 16, 2025
* vulkan: add LOG operation support for F32 and F16

Part of ggml-org#14909.

* vulkan: Fix LOG operation types

* docs: Update operation support documentation for Vulkan LOG operation

* vulkan: fix log_f16 shader

* docs: restore missing LOG test cases and regenerate ops.md

CISC (Collaborator) commented Nov 17, 2025

0cc4m (Collaborator) commented Nov 17, 2025

No, I think that's another RTE (round-to-even) issue. It didn't happen on my hardware, but it seems persistent on the CI.

zayac (Contributor, Author) commented Nov 17, 2025

I cannot reproduce it locally either.

Both

./build/bin/test-backend-ops test -o LOG
./build/bin/test-backend-ops test -b Vulkan0

succeed for me.

0cc4m (Collaborator) commented Nov 17, 2025

Let's see if #17320 fixes it.

zayac deleted the vulkan-log-operation branch November 19, 2025 10:28