openGPT-X/Teuken-7B-instruct-commercial-v0.4 support #10539
-
For languages like Spanish, French, German, Italian, ... this model may have an advantage because of its (pre-)training dataset (see https://opengpt-x.de/models/teuken-7b-de/).
-
At https://bsky.app/profile/justine.lol/post/3lbxrnl6lps2l Justine Tunney describes an issue with the tokenizer of the converted model. The conversion is done with `./convert_hf_to_gguf.py`, and the `</s>` token is then verified with:
`printf '</s>' | llama-tokenize -m teuken-7b-instruct-commercial-v0.4-bf16.gguf --stdin --no-bos`
The command should have returned the single end-of-sequence token. This would, I guess, explain the problem.
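As a rough sketch, the two steps could look like the block below; the local model directory, output file name and `--outtype` value are my own assumptions, not taken from the original post:

```sh
# Convert the Hugging Face checkpoint to a bf16 GGUF.
# Directory and file names are assumptions for illustration.
./convert_hf_to_gguf.py Teuken-7B-instruct-commercial-v0.4 \
    --outtype bf16 \
    --outfile teuken-7b-instruct-commercial-v0.4-bf16.gguf

# Verify how the end-of-sequence marker is tokenized.
# A correct conversion should map '</s>' to the single EOS token.
printf '</s>' | llama-tokenize \
    -m teuken-7b-instruct-commercial-v0.4-bf16.gguf \
    --stdin --no-bos
```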
-
I've got some feedback from the Discord channel regarding llama.cpp support (translated from German):
-
openGPT-X/Teuken-7B-instruct is a multilingual, open source model for Europe – instruction-tuned and trained in all 24 EU languages.
I've used gguf-my-repo to convert from safetensors to Q6_K at https://huggingface.co/cristianadam/Teuken-7B-instruct-commercial-v0.4-Q6_K-GGUF. In the meantime there are other GGUF repositories out there.
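For reference, the same Q6_K quantization can also be done locally with llama.cpp's `llama-quantize` tool; this is only a sketch, and the file names are assumed:

```sh
# Quantize a previously converted bf16 GGUF to Q6_K.
# File names are assumptions; adjust to your local paths.
./llama-quantize \
    teuken-7b-instruct-commercial-v0.4-bf16.gguf \
    teuken-7b-instruct-commercial-v0.4-Q6_K.gguf \
    Q6_K
```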
It seems to work:
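A quick way to try the quantized model locally is shown below; the model file name, prompt and flags are illustrative assumptions, not the exact invocation from the original post:

```sh
# Run the quantized model with llama.cpp's CLI.
# Model file name and prompt are assumptions for illustration.
./llama-cli \
    -m Teuken-7B-instruct-commercial-v0.4-Q6_K.gguf \
    -p "Wie heißt die Hauptstadt von Deutschland?" \
    -n 128
```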
I've asked Justine Tunney to have a look and she said:
and later
Since I'm just a n00b I just want to raise awareness of this model and hope for some changes that would get it to work correctly with llama.cpp 🙏