Add GGUF models (llama.cpp compatible) #411

Merged
2 commits merged into main from gguf-models on Nov 16, 2023
Conversation

@ymcui (Owner) commented on Nov 16, 2023

Description

Add links for GGUF (v3) models.
Users can directly download llama.cpp-compatible GGUF models from our Hugging Face model hub.
We mainly provide the q2_k, q3_k, q4_0, q4_k, q5_0, q5_k, q6_k, and q8_0 quantization types; a download-and-load sketch follows below.
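
For reference, a minimal download-and-load sketch, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the repository and file names below are placeholders, not our actual Hub IDs:

```python
# Sketch: fetch one quantized GGUF file from the Hugging Face Hub and load it
# with llama.cpp via the llama-cpp-python bindings.
# repo_id and filename are placeholders -- substitute the actual model repo/file.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="your-org/your-model-gguf",   # placeholder repo id
    filename="ggml-model-q4_k.gguf",      # placeholder file name
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Q: What does the q4_k quantization type mean? A:", max_tokens=64)
print(out["choices"][0]["text"])
```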

Note: As llama.cpp is under active development, the model format may change in the future. You are encouraged to re-convert the models yourself if the format changes; a rough re-conversion sketch follows below.
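
If the format does change, here is a rough sketch of the re-conversion flow using llama.cpp's own tooling; the script name, `quantize` binary, and paths below follow the current llama.cpp repository layout and are assumptions, so adjust them to your checkout:

```python
# Sketch: re-convert a Hugging Face-format checkpoint to GGUF and re-quantize it,
# by driving llama.cpp's convert.py and quantize tools from Python.
# All paths below are placeholders.
import subprocess

HF_MODEL_DIR = "path/to/hf-model"   # local Hugging Face-format checkpoint
F16_GGUF = "model-f16.gguf"         # intermediate full-precision GGUF
Q4_K_GGUF = "model-q4_k.gguf"       # final quantized GGUF

# 1) Convert the checkpoint to an f16 GGUF file.
subprocess.run(
    ["python", "convert.py", HF_MODEL_DIR, "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2) Quantize to one of the types listed above (Q4_K here).
subprocess.run(["./quantize", F16_GGUF, Q4_K_GGUF, "Q4_K"], check=True)
```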

Related Issue

None.

@ymcui ymcui marked this pull request as ready for review November 16, 2023 07:34
@ymcui ymcui requested a review from iMountTai November 16, 2023 07:34
@ymcui ymcui merged commit d7d6211 into main Nov 16, 2023
1 check passed
@ymcui ymcui deleted the gguf-models branch November 16, 2023 08:28