Prerequisites
I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
Attempting to run `python3 convert-hf-to-gguf.py` with NVIDIA's latest NVEmbed model yields `NotImplementedError: Architecture 'NVEmbedModel' not supported!`. Add support for the `NVEmbedModel` architecture.
Motivation
NVIDIA recently released their NVEmbed embeddings model based on the Mistral 7B decoder that ranks #1 on the MTEB leaderboard. It would be nice to see support for this in llama.cpp.
Possible Implementation
I'm not sure how different it would be from existing embedding architectures. I'm aware other decoder-based models like SFR Embedding Mistral have GGUF quants which work, so I figure the NVEmbed model is structured similarly. Then it's mostly a matter of writing a new model class for it in `convert-hf-to-gguf.py`.
It looks like NVEmbed is basically Mistral but with non-causal attention and "latent attention" pooling. I hadn't seen latent attention pooling before, but judging from the modeling code on HF, it's just another attention layer on top of the last hidden states.
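For intuition, the pooling described above can be sketched as cross-attention between the last hidden states and a small trainable latent array, followed by mean pooling. This is a minimal NumPy sketch, not NVEmbed's actual modeling code; all names, shapes, and the single-head/no-bias simplifications are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def latent_attention_pool(hidden, latents, wq, wk, wv):
    # hidden:  (seq_len, d)     last-layer hidden states, used as queries
    # latents: (num_latents, d) trainable latent array, used as keys/values
    q = hidden @ wq                                    # (seq_len, d)
    k = latents @ wk                                   # (num_latents, d)
    v = latents @ wv                                   # (num_latents, d)
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (seq_len, num_latents)
    attended = scores @ v                              # (seq_len, d)
    return attended.mean(axis=0)                       # mean-pool tokens -> (d,)

# Toy usage with random weights, just to show the shapes.
rng = np.random.default_rng(0)
d, seq_len, n_lat = 64, 10, 4
emb = latent_attention_pool(
    rng.normal(size=(seq_len, d)),
    rng.normal(size=(n_lat, d)),
    rng.normal(size=(d, d)),
    rng.normal(size=(d, d)),
    rng.normal(size=(d, d)),
)
print(emb.shape)  # (64,)
```

The upshot for a GGML implementation is that this is just one more attention computation plus a mean pool, with the latent array stored as an extra weight tensor.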
Right now in llama.cpp, we can tell causal-by-default models like Mistral to use non-causal attention. If we get #7477 merged, that will allow general pooling on these models. The only catch is we don't have latent pooling implemented, but it should be quite straightforward.