Automatically map deduplicated safetensors weights to their original values #501
Conversation
This PR automatically points tensors that were removed due to deduplication to their still-existing twin.

In `server.text_generation_server.utils.convert.py#convert_file`, tensors that have a value equal to another tensor are removed from the list of weights. Their name, and the name of the still-existing "twin", are logged to the "metadata" dictionary. However, this dictionary was not yet used during loading, so loading these models requires explicit in-code remapping (as mentioned in the docstring). This PR adds some simple code to check, during loading, whether a weight is one of those removed weights; if so, it automatically retrieves the value of its still-existing "twin" instead.

What does this fix?

We currently cannot load `h2oai/h2ogpt-oig-oasst1-falcon-40b` with the unmodified server, since the `transformer.word_embeddings.weight` weight is equal to `lm_head.weight` and is automatically removed. The Falcon code, however, still expects this weight to exist.
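For illustration, a minimal sketch of that remapping, under the assumption that the conversion step stores `metadata[removed_name] = kept_name` (the helper name is hypothetical, not the PR's actual code):

```python
from safetensors import safe_open

# Hypothetical helper showing the idea: convert_file records
# metadata[removed_name] = kept_name for every tensor it drops,
# so a loader can follow that mapping transparently.
def load_tensor(filename: str, name: str):
    with safe_open(filename, framework="pt") as f:
        metadata = f.metadata() or {}
        if name not in f.keys() and name in metadata:
            # The tensor was deduplicated away; load its surviving twin.
            name = metadata[name]
        return f.get_tensor(name)
```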
Hey, thanks for the PR! Unfortunately, that metadata is kept for hard debugging but is missing crucial information: it doesn't record whether the tensor was a slice or not. And the metadata will not necessarily be present. I suggest a different fix:

```diff
diff --git a/server/text_generation_server/models/flash_rw.py b/server/text_generation_server/models/flash_rw.py
index 5f963bf..33079ac 100644
--- a/server/text_generation_server/models/flash_rw.py
+++ b/server/text_generation_server/models/flash_rw.py
@@ -48,7 +48,13 @@ class FlashRWSharded(FlashCausalLM):
         torch.distributed.barrier(group=self.process_group)
 
         filenames = weight_files(model_id, revision=revision, extension=".safetensors")
 
-        weights = Weights(filenames, device, dtype, process_group=self.process_group)
+        weights = Weights(
+            filenames,
+            device,
+            dtype,
+            process_group=self.process_group,
+            aliases={"transformer.word_embeddings.weight": ["lm_head.weight"]},
+        )
 
         config.quantize = quantize
```

Would that work?
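A simplified sketch of how such an `aliases` fallback can work during weight lookup (reduced constructor signature; not the actual `Weights` implementation):

```python
class Weights:
    def __init__(self, routing: dict, aliases: dict | None = None):
        # routing: tensor name -> safetensors file that contains it
        self.routing = routing
        self.aliases = aliases or {}

    def get_filename(self, tensor_name: str) -> tuple[str, str]:
        filename = self.routing.get(tensor_name)
        if filename is None:
            # Fall back to explicitly declared aliases, e.g. the tied
            # lm_head.weight standing in for word_embeddings.weight.
            for alias in self.aliases.get(tensor_name, []):
                filename = self.routing.get(alias)
                if filename is not None:
                    return filename, alias
            raise RuntimeError(f"weight {tensor_name} does not exist")
        return filename, tensor_name
```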
Without having tested the fix (I don't have access to the GPU server on weekends), this seems like it would also fix my problem. I initially thought about doing this too. However, it is still only a fix for this specific model. The metadata may not be a trustworthy source for tensor aliases, but it may still be a valid fallback, no? It could also be improved by adding a namespacing prefix. Curious to hear what you think.
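A tiny sketch of the namespacing idea (the prefix and helper names are hypothetical): alias entries get a dedicated prefix so they cannot collide with regular safetensors metadata keys such as "format".

```python
ALIAS_PREFIX = "dedup/"  # hypothetical namespace for alias entries

def record_removed(metadata: dict, removed: str, kept: str) -> None:
    # Written during conversion, next to e.g. {"format": "pt"}.
    metadata[ALIAS_PREFIX + removed] = kept

def resolve_alias(metadata: dict, name: str) -> str:
    # Return the surviving twin if `name` was deduplicated, else `name`.
    return metadata.get(ALIAS_PREFIX + name, name)
```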
Indeed, but the other one can lead to potentially catastrophic failure (loading the wrong weights), which is even worse IMO. Are you OK if I update this PR? (If I can; otherwise I'll just create a new one with you as co-author.)
Alright, fair point. Silently loading the wrong weights is definitely undesired.
This reverts commit d6bb10f.
I tested the new fix "on my machine" and it worked. A colleague then used it and it didn't work for him. The difference? In his version of the converted safetensors file, the deduplication had kept the opposite twin, so the alias pointed in the wrong direction. Unless the conversion can be made consistent, this means we should also alias the weights the other way around: either explicitly, the same as done in the current patch, or automatically (two-way aliases by default). I don't know why his safetensors conversion is different; I guess the order of dictionary keys is not guaranteed to be consistent, which then affects `safetensors.torch._remove_duplicate_names`. Pretty sure that using its …
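A minimal sketch of the two-way idea (helper name hypothetical): make the alias table symmetric so the lookup succeeds no matter which twin the conversion happened to keep.

```python
def symmetrize_aliases(aliases: dict) -> dict:
    # If A is declared as an alias of B, also register B as an alias
    # of A, so it doesn't matter which tied twin survived deduplication.
    symmetric = {name: list(twins) for name, twins in aliases.items()}
    for name, twins in aliases.items():
        for twin in twins:
            symmetric.setdefault(twin, [])
            if name not in symmetric[twin]:
                symmetric[twin].append(name)
    return symmetric

# {"transformer.word_embeddings.weight": ["lm_head.weight"]} now also
# routes "lm_head.weight" back to "transformer.word_embeddings.weight".
```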
- Look at the `transformers` base class to check for `_key_to_ignore_on_load_missing` or `_tied_weights`, which are the standard attributes for selecting the keys NOT to save on disk (since they are ignored).
- Modified safetensors code (to be reflected in safetensors, even if it's an internal function).
- Will not work for trust_remote_code=True repos (like santacoder).

Should help with: #555, #501, #556 and #482 (comment)
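A rough sketch of that lookup (the attribute names vary across `transformers` versions, so this probes a couple of candidates; it assumes the architecture class is importable from `transformers`, which is exactly what fails for trust_remote_code repos):

```python
import transformers

def get_discard_names(architecture: str) -> list:
    # e.g. architecture = "GPT2LMHeadModel"; trust_remote_code classes do
    # not live in the transformers package, so they resolve to nothing.
    class_ = getattr(transformers, architecture, None)
    if class_ is None:
        return []
    for attr in ("_tied_weights_keys", "_keys_to_ignore_on_load_missing"):
        names = getattr(class_, attr, None)
        if names:
            # These are the tied/ignored keys that are safe NOT to save.
            return list(names)
    return []
```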
Any update on this PR? I also encountered the issue of a missing `lm_head.weight` when trying to load a Falcon model with text-generation-inference.
Hi @lppllppl920, thanks for the ping. I'm not sure why it wasn't merged.
Automatically map deduplicated safetensors weights to their original values (#501) (#761)

# What does this PR do?

CI check for #501.

This PR automatically points tensors that were removed due to deduplication to their still-existing twin. In `server.text_generation_server.utils.convert.py#convert_file`, tensors that have a value equal to another tensor are removed from the list of weights. Their name, and the name of the still-existing "twin", are logged to the "metadata" dictionary. However, this dictionary was not yet used during loading, so loading these models requires explicit in-code remapping (as mentioned in the docstring). This PR adds some simple code to check, during loading, whether a weight is one of those removed weights; if so, it automatically retrieves the value of its still-existing "twin" instead.

## What does this fix?

We currently cannot load `h2oai/h2ogpt-oig-oasst1-falcon-40b` with the unmodified server, since the `transformer.word_embeddings.weight` weight is equal to `lm_head.weight` and is automatically removed. The Falcon code, however, still expects this weight to exist. I could have also added some extra checks to the model itself, though that would only be a workaround.

Co-authored-by: Vincent Brouwers <vincentbrouwers9@gmail.com>
Co-authored-by: Vincent Brouwers <vincent.brouwers@ing.com>
Thank you!
The fix actually doesn't work; I discovered this while testing. Fix coming soon: https://github.com/huggingface/text-generation-inference/pull/762/files#diff-2111bae5f77d998a3fe39888906b3c7be122313241ed6b69b0b0baf5abb735bbL57