
Support the bit-shuffling changes from llama.cpp #198

Closed
philpax opened this issue May 9, 2023 · 3 comments
Labels
issue:enhancement New feature or request
Milestone
0.2

Comments

philpax commented May 9, 2023

A new file version is being introduced to change how the tensors are stored on-disk: ggerganov/llama.cpp#1305

We will need to support this version, as well as the older versions.
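
To illustrate what supporting both on-disk versions could look like, here is a minimal sketch of version-dispatched block unpacking. The `FileVersion` enum, the `unpack_q4_block` function, and the two nibble orderings shown are illustrative assumptions rather than this crate's actual API; the authoritative layouts are defined in the linked llama.cpp PRs.

```rust
#[derive(Clone, Copy)]
enum FileVersion {
    Ggjt1, // pre-change layout (bit-shuffled; assumed ordering below)
    Ggjt2, // post-change layout (bit shuffling removed; assumed ordering below)
}

/// Unpack one Q4-style block of 16 bytes (32 packed 4-bit quants) into 32
/// values, choosing the element order based on the file version.
fn unpack_q4_block(version: FileVersion, packed: &[u8; 16]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for (i, &byte) in packed.iter().enumerate() {
        let lo = byte & 0x0F;
        let hi = byte >> 4;
        match version {
            // Illustrative "interleaved" order: byte i holds elements 2i and 2i + 1.
            FileVersion::Ggjt1 => {
                out[2 * i] = lo;
                out[2 * i + 1] = hi;
            }
            // Illustrative "split" order: low nibbles hold elements 0..16,
            // high nibbles hold elements 16..32.
            FileVersion::Ggjt2 => {
                out[i] = lo;
                out[i + 16] = hi;
            }
        }
    }
    out
}

fn main() {
    let packed = [0x21u8; 16];
    println!("v1 order: {:?}", unpack_q4_block(FileVersion::Ggjt1, &packed));
    println!("v2 order: {:?}", unpack_q4_block(FileVersion::Ggjt2, &packed));
}
```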

philpax added the issue:enhancement label on May 9, 2023
philpax commented May 13, 2023

It's been merged: ggerganov/llama.cpp#1405

There doesn't seem to be a migration path at present, so let's wait a bit: ggerganov/llama.cpp#1408

philpax commented May 16, 2023

This is done in #226, but I'd like to set up a migration path before I close this issue.

philpax added this to the 0.2 milestone on May 18, 2023
philpax commented May 22, 2023

No migration path for now; see #261.

philpax closed this as completed on May 22, 2023