Conversation

Member

@giladgd giladgd commented Oct 19, 2025

Description of change

  • fix(Vulkan): include integrated GPU memory - adapt to a change in llama.cpp
  • fix(Vulkan): deduplicate the same device coming from different drivers
  • fix: adapt Llama chat wrappers to breaking llama.cpp changes
  • fix: internal log level
  • docs(Vulkan): recommend installing LLVM on Windows
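
To illustrate the second bullet: when the same physical GPU is exposed by two Vulkan drivers (e.g. a vendor driver and a generic one), it shows up as two devices. A minimal sketch of the deduplication idea follows; the `VulkanDevice` type, its fields, and `dedupeDevices` are illustrative names for this example, not the actual node-llama-cpp internals.

```typescript
// Hypothetical shape of a device entry enumerated from Vulkan.
type VulkanDevice = {
    name: string,
    vendorId: number,
    deviceId: number,
    totalMemory: number
};

// Keep only one entry per physical GPU. Two drivers exposing the same
// hardware report the same vendor/device IDs, so key on those rather
// than on the driver that reported the device.
function dedupeDevices(devices: VulkanDevice[]): VulkanDevice[] {
    const seen = new Map<string, VulkanDevice>();

    for (const device of devices) {
        const key = `${device.vendorId}:${device.deviceId}`;
        if (!seen.has(key))
            seen.set(key, device);
    }

    return [...seen.values()];
}
```

In this sketch the first driver's entry wins; a real implementation might instead prefer the driver with better capabilities or more accurate memory reporting.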

Pull-Request Checklist

  • Code is up-to-date with the master branch
  • npm run format to apply eslint formatting
  • npm run test passes with this change
  • This pull request links relevant issues as Fixes #0000
  • There are new or updated unit tests validating the change
  • Documentation has been updated to reflect this change
  • The new commits and pull request title follow conventions explained in pull request guidelines (PRs that do not follow this convention will not be merged)

@giladgd giladgd self-assigned this Oct 19, 2025
@giladgd giladgd requested a review from ido-pluto October 20, 2025 16:55
Contributor

@ido-pluto ido-pluto left a comment

LGTM

@giladgd giladgd merged commit 47475ac into master Oct 26, 2025
19 checks passed
@giladgd giladgd deleted the gilad/vulkanMemory branch October 26, 2025 15:58
3 participants