RuntimeError: weight encoder.embed_tokens.weight does not exist #556
Same issue here. Full startup log below:
With sharding disabled (same error, but easier to read):
Looking at it in more detail, this is the same issue as
I am also getting the same error with the falcon-7B model, and with most of the MPT and Falcon models. Model: falcon-7B
The PR above should help. It's only a matter of weight naming.
- Look at the `transformers` base class to check for `_key_to_ignore_on_load_missing` or `_tied_weights`, which are the standard attributes used to select the keys to NOT save on disk (since they are ignored).
- Modified safetensors code (to be reflected in safetensors even if it's an internal function).
- Will not work for trust_remote_code=True repos (like santacoder).

Should help with: #555, #501, #556, and #482 (comment)
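A minimal sketch of the failure mode the PR addresses, using plain dictionaries (the helper names `tied_keys` and `get_weight` here are hypothetical, not the actual TGI or transformers API): when the input embedding and the LM head share a tied weight, conversion saves only one copy of the shared matrix, so a loader that looks up one of the dropped names fails unless it falls back to the tied key.

```python
# Illustrative sketch only: tied_keys and get_weight are hypothetical names,
# not the real text-generation-inference loader API. This shows why a
# checkpoint saved without tied weights triggers
# "RuntimeError: weight encoder.embed_tokens.weight does not exist".

state_dict = {
    "shared.weight": [[0.1, 0.2]],                # the one real copy
    "encoder.embed_tokens.weight": [[0.1, 0.2]],  # tied to shared.weight
    "lm_head.weight": [[0.1, 0.2]],               # tied to shared.weight
}

# Conversion drops the tied keys, so only one copy reaches the file on disk.
tied_keys = {"encoder.embed_tokens.weight", "lm_head.weight"}
saved = {k: v for k, v in state_dict.items() if k not in tied_keys}

def get_weight(weights, name, tied_to=None):
    """Look up a weight by name, falling back to the key it is tied to."""
    if name in weights:
        return weights[name]
    if tied_to is not None and tied_to in weights:
        return weights[tied_to]
    raise RuntimeError(f"weight {name} does not exist")

# Without the fallback, the loader reproduces the reported error; with it,
# the tied name resolves to the surviving copy.
embedding = get_weight(saved, "encoder.embed_tokens.weight",
                       tied_to="shared.weight")
```

The actual fix operates at conversion time by consulting the attributes listed above, but the fallback-to-tied-key idea is the same shape.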
Thanks @Narsil, it does work for me too with
Try updating Docker and running the latest image:
Thanks @chumpblocckami - I did, and it does work well with the
Shall I create a separate issue for this?
The same issue happens for OPT.
Got it too with server version 1.0.3 (using docker), and also with latest.
After running:
I receive:
I tried multiple small models, but every one raises the same issue.
Any tips?
Thanks