Trying to convert intfloat/e5-mistral-7b-instruct to GGUF #4786

Answered by s3nh
AshD asked this question in Q&A
I think I managed it, just by modifying tensor_mapping.py and updating the TensorNameMap dictionary, using names taken strictly from the LoRA adapter.
I uploaded the modified file here:
https://gist.github.com/s3nh/a06f827bc492eb4b667db09d44b922e7

Then:

1. Convert the base model and the LoRA adapter to fp16.bin
2. Merge them
3. Quantize with llama.cpp/quantize

I got feedback that it looks OK, so you can give it a try and prove me wrong eventually.

https://huggingface.co/s3nh/intfloat-e5-mistral-7b-instruct-GGUF
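The three steps above could look roughly like the following with llama.cpp's standard tooling. This is only a sketch: the model directory names, output file names, and the quantization type are assumptions, not details given in the answer.

```shell
# Sketch only -- local paths and file names below are assumptions.

# 1. Convert the base model to an fp16 GGUF file
python convert.py ./e5-mistral-7b-instruct --outtype f16 --outfile base-f16.gguf

# (also convert the LoRA adapter to the format llama.cpp expects)
python convert-lora-to-ggml.py ./lora-adapter

# 2. Merge the LoRA adapter into the base model
./export-lora --model-base base-f16.gguf --model-out merged-f16.gguf \
    --lora ./lora-adapter/ggml-adapter-model.bin

# 3. Quantize the merged model (Q4_K_M chosen here as an example)
./quantize merged-f16.gguf merged-q4_k_m.gguf q4_k_m
```

Note that the custom tensor-name mapping from the linked gist has to be in place before the conversion step, otherwise the converter will not recognize the adapter's tensor names.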

Answer selected by AshD