
Gemma Is Not Supported #533

Closed
zddtpy opened this issue Feb 22, 2024 · 8 comments
Labels
enhancement New feature or request

Comments

@zddtpy

zddtpy commented Feb 22, 2024

This is a .NET 8.0 console project. When I run it, the following error occurs: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.' Can anyone provide some help? Thanks. The model used is Google's gemma-2b.

@swharden

This comment was marked as outdated.

@martindevans martindevans added the Upstream Tracking an issue in llama.cpp label Feb 22, 2024
@martindevans
Member

Gemma is a very new model format that's not supported in LLamaSharp yet. Support was added to llama.cpp yesterday, we'll hopefully upgrade our version within a month or so :)

@zddtpy
Author

zddtpy commented Feb 22, 2024

> Hi @zddtpy, I had a similar error when I followed the steps on https://github.com/SciSharp/LLamaSharp?tab=readme-ov-file#installation
>
> In my case the error was resolved by removing the "cuda" package and installing the "cpu" one instead.

Emm, maybe I have already installed the "cpu" package, but it does not work.
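For reference, the swap described above can be done with the .NET CLI. This is a sketch assuming the CUDA 12 backend package is the one installed; the `LLamaSharp.Backend.*` package names follow the LLamaSharp README, and only one backend package should be present at a time:

```shell
# Remove the CUDA backend (use Cuda11 or Cuda12, whichever the project references)
dotnet remove package LLamaSharp.Backend.Cuda12

# Install the CPU backend instead
dotnet add package LLamaSharp.Backend.Cpu
```

After changing backends, a clean rebuild (`dotnet clean && dotnet build`) helps ensure the old native binaries are not still being copied to the output directory.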

@zddtpy
Author

zddtpy commented Feb 22, 2024

> Gemma is a very new model format that's not supported in LLamaSharp yet. Support was added to llama.cpp yesterday, we'll hopefully upgrade our version within a month or so :)

Looking forward to it!

@swharden

This comment was marked as off-topic.

@martindevans martindevans added enhancement New feature or request and removed Upstream Tracking an issue in llama.cpp labels Feb 24, 2024
@martindevans martindevans changed the title Attempted to read or write protected memory. This is often an indication that other memory is corrupt Gemma Is Not Supported Feb 24, 2024
@zsogitbe
Contributor

There is no special code needed in LLamaSharp for Gemma to work. You just need to recompile the Cpp code yourself and it will work (I have tested the Gemma models already).

@martindevans
Member

> You just need to recompile the Cpp code yourself and it will work (I have tested the Gemma models already)

Note that in general you can't just recompile an up-to-date version of llama.cpp and use it with LLamaSharp. llama.cpp is constantly making small breaking changes to their API, so almost every update of the binaries requires a bit of tweaking to the C# binding layer.

@martindevans
Member

Gemma should be supported since the last set of binary updates (0.11.x), so I'll close this issue now :)
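For anyone landing here later, a minimal sketch of loading a Gemma GGUF model with LLamaSharp (0.11.x-era API, per the project's README); the model path, `ContextSize`, and `GpuLayerCount` values here are placeholders to adjust for your setup:

```csharp
using LLama;
using LLama.Common;

// Placeholder path to a Gemma model converted to GGUF format
var parameters = new ModelParams(@"models/gemma-2b.gguf")
{
    ContextSize = 2048,
    GpuLayerCount = 0 // CPU-only; raise this when using a CUDA backend package
};

// Load the weights and create an inference context
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
var executor = new InteractiveExecutor(context);
```

If the "Attempted to read or write protected memory" error still appears with a matching backend package, it usually means the native llama.cpp binaries and the C# binding layer are out of sync, as discussed above.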
