
March 2025 Update #1126

Merged: 7 commits, Mar 20, 2025
Conversation

martindevans (Member) commented Mar 9, 2025

Updated to llama.cpp be7c3034108473beda214fd1d7c98fd6a7a3bdf5, built with https://github.com/martindevans/LLamaSharp/actions/runs/13838284757.

This includes the musl changes introduced by @Coopaguard in #1100.

  • Windows CPU
  • Windows CUDA
  • Windows Vulkan
  • Linux CPU
  • Linux CPU (musl)
  • Linux CUDA
  • Linux Vulkan
  • MacOS

@LoicDagnas

Hello @martindevans, how difficult would it be to update the binaries to include the work from ggml-org/llama.cpp#12322, so that image embedders work on GPU again? 🙏🏼

m0nsky (Contributor) commented Mar 12, 2025

I haven't started testing the other platforms yet, so I would be fine with it. The difference seems to be only two days, so hopefully it won't be too difficult on the C# side.

martindevans (Member, Author) commented Mar 12, 2025

There have been no changes to llama.h since 6fefc05a7a4e676780ae10b0a4d0728e5281f367, which is what this PR is based on, so it should be as simple as running another build against a newer commit.

I didn't realise that PR had only just been merged, so it doesn't yet have a corresponding llama.cpp release. I'll wait a few hours and start a build later this evening.
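The "no changes to llama.h" check described above can be reproduced with a sketch like the following (assumptions: a fresh local clone of ggml-org/llama.cpp, network access, and that the public header lives at include/llama.h at these commits; the hashes are the ones quoted in this thread):

```shell
# Clone llama.cpp and compare the public C API header between the commit
# this PR was based on and the newer target commit.
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp

# An empty diff means the C API is unchanged, so the C# bindings need no
# edits and rebuilding the native binaries should be sufficient.
git diff 6fefc05a7a4e676780ae10b0a4d0728e5281f367 \
         be7c3034108473beda214fd1d7c98fd6a7a3bdf5 \
         -- include/llama.h
```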

martindevans (Member, Author) commented Mar 13, 2025

@Coopaguard @LoicDagnas @m0nsky: I've updated to be7c3034108473beda214fd1d7c98fd6a7a3bdf5.

martindevans (Member, Author)

On my PC all the tests pass except the KernelMemory ones. You can probably go ahead with testing the various platforms while I look into that.

SignalRT (Collaborator)

> On my PC all the tests pass except the KernelMemory ones. You can probably go ahead with testing the various platforms while I look into that.

I have the same issue running the KernelMemory tests, but that brings me to another question. Why does this library exist in LlamaSharp? Isn’t there a connector in KernelMemory to use LlamaSharp? It seems to duplicate work.

martindevans (Member, Author)

That's a good point. I don't really know what the history is there, or how the two compare. I've only ever touched the KM integration to fix breaking changes (like in this PR).

martindevans (Member, Author)

KM tests should be fixed now

SignalRT (Collaborator)

> KM tests should be fixed now

It's fixed.

I get a warning in the Rider IDE about a security issue in "SixLabors.ImageSharp" Version="3.1.5"; I also tested version 3.1.7 without problems. Reference: https://osv.dev/vulnerability/GHSA-2cmq-823j-5qj8

I did not commit the change to upgrade the SixLabors.ImageSharp package.
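The upgrade tested above could be applied with a one-liner like this (a sketch only; the project path is an assumption, so substitute whichever LLamaSharp project actually references the package):

```shell
# Bump SixLabors.ImageSharp past the GHSA-2cmq-823j-5qj8 advisory in the
# project that references it (the project path here is hypothetical).
dotnet add LLama.KernelMemory package SixLabors.ImageSharp --version 3.1.7
```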

martindevans (Member, Author)

I'll have a look at the ImageSharp thing in a separate PR, thanks for testing it.

m0nsky (Contributor) commented Mar 17, 2025

Unit tests passed on Windows CUDA & Linux CUDA.

Test application is running fine on:

  • Windows CPU
  • Windows CUDA
  • Windows Vulkan
  • Linux CPU
  • Linux CUDA
  • Linux Vulkan

@martindevans martindevans merged commit 6a92faf into SciSharp:master Mar 20, 2025
6 checks passed
@martindevans martindevans deleted the binary_update_march_2025 branch March 20, 2025 01:16