
Automatically map deduplicated safetensors weights to their original values #501

Merged

Conversation

@Vinno97 (Contributor) commented Jun 28, 2023

What does this PR do?

This PR automatically points tensors that were removed due to deduplication to their still-existing twin.

In server.text_generation_server.utils.convert.py#convert_file, tensors whose value equals another tensor's are removed from the list of weights. Their name and the name of the still-existing "twin" are logged to the "metadata" dictionary. However, this dictionary was not yet used during loading. This requires explicit in-code remapping when loading the models (as mentioned in the docstring).

This PR adds some simple code to check, during loading, if a weight is one of those removed weights. It then automatically retrieves the values of its still-existing "twin" instead.
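For illustration, a minimal sketch of that remapping, assuming the metadata maps each removed tensor's name to the name of its kept twin, as described above (the helper `resolve_name` is made up; `safe_open`, `metadata()`, and `get_tensor()` are real safetensors APIs):

```python
from safetensors import safe_open

def resolve_name(name: str, metadata: dict) -> str:
    # Follow the dedup metadata until we reach a tensor that still
    # exists in the file (its still-existing "twin").
    while name in metadata:
        name = metadata[name]
    return name

with safe_open("model.safetensors", framework="pt") as f:
    metadata = f.metadata() or {}
    name = resolve_name("transformer.word_embeddings.weight", metadata)
    tensor = f.get_tensor(name)
```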

What does this fix?

We currently cannot load h2oai/h2ogpt-oig-oasst1-falcon-40b with the unmodified server, since the transformer.word_embeddings.weight weight is equal to lm_head.weight and is automatically removed. The Falcon code, however, still expects this weight to exist. I could have also added some extra checks to the model itself, though that would only be a workaround.
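For context, tied weights share a single storage, which is why the safetensors deduplication pass drops one of the two names; a tiny illustration (not TGI code):

```python
import torch

emb = torch.nn.Embedding(32, 8)
lm_head = torch.nn.Linear(8, 32, bias=False)
lm_head.weight = emb.weight  # weight tying, as Falcon does with lm_head

# Both parameter names now point at the same storage, so a conversion
# that deduplicates by value will keep only one of the two names.
assert lm_head.weight.data_ptr() == emb.weight.data_ptr()
```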

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@Narsil (Collaborator) commented Jun 30, 2023

Hey, thanks for the PR!

Unfortunately, that metadata is only kept for hard debugging, and it's missing crucial information: namely, it doesn't record whether the tensor was a slice or not. And the metadata will not necessarily be present.
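To illustrate the slice problem (again, not TGI code): two names can share storage without being interchangeable, and the dedup metadata records neither offsets nor shapes:

```python
import torch

full = torch.arange(12).reshape(3, 4)
part = full[:2]  # shares storage with `full` but is a smaller view

# A purely name-based remap would hand back `full` where `part` was
# expected; nothing in the metadata says how to re-slice it.
assert part.data_ptr() == full.data_ptr()
assert part.shape != full.shape
```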

I suggest a different fix:

diff --git a/server/text_generation_server/models/flash_rw.py b/server/text_generation_server/models/flash_rw.py
index 5f963bf..33079ac 100644
--- a/server/text_generation_server/models/flash_rw.py
+++ b/server/text_generation_server/models/flash_rw.py
@@ -48,7 +48,13 @@ class FlashRWSharded(FlashCausalLM):

         torch.distributed.barrier(group=self.process_group)
         filenames = weight_files(model_id, revision=revision, extension=".safetensors")
-        weights = Weights(filenames, device, dtype, process_group=self.process_group)
+        weights = Weights(
+            filenames,
+            device,
+            dtype,
+            process_group=self.process_group,
+            aliases={"transformer.word_embeddings.weight": ["lm_head.weight"]},
+        )

         config.quantize = quantize

Would that work?
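A rough sketch of how such an `aliases` argument could be honored inside a `Weights`-style loader (illustrative only; the real class and its internals live in TGI's weight-loading utilities):

```python
class Weights:
    """Minimal sketch; only the alias fallback is shown."""

    def __init__(self, routing, aliases=None):
        # routing maps tensor name -> safetensors file containing it
        self.routing = routing
        self.aliases = aliases or {}

    def get_filename(self, tensor_name):
        if tensor_name not in self.routing:
            # Fall back to declared aliases, e.g. lm_head.weight when
            # transformer.word_embeddings.weight was deduplicated away.
            for alias in self.aliases.get(tensor_name, []):
                if alias in self.routing:
                    return self.routing[alias], alias
        return self.routing[tensor_name], tensor_name


w = Weights(
    routing={"lm_head.weight": "model-00001-of-00018.safetensors"},
    aliases={"transformer.word_embeddings.weight": ["lm_head.weight"]},
)
assert w.get_filename("transformer.word_embeddings.weight")[1] == "lm_head.weight"
```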

@Vinno97 (Contributor, Author) commented Jul 1, 2023

Without having tested the fix (I don't have access to the GPU server during the weekends), this seems like it would also fix my problem.

I initially also thought about doing this. However, it is still only a fix for this specific model. The metadata may not be a trustworthy source for tensor aliases, but it may still be a valid fallback, no? It could also be improved by adding a namespacing prefix like alias-- to the key, to prevent conflicts.

Curious to hear what you think
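A sketch of the alias-- namespacing idea, with hypothetical key names (safetensors metadata is a flat string-to-string dict, so a prefix is one way to keep alias entries from colliding with other keys such as "format"):

```python
ALIAS_PREFIX = "alias--"

# Conversion side: record each removed tensor under a namespaced key.
metadata = {"format": "pt"}
metadata[ALIAS_PREFIX + "transformer.word_embeddings.weight"] = "lm_head.weight"

# Loading side: only keys carrying the prefix are treated as aliases.
def lookup_alias(name, metadata):
    return metadata.get(ALIAS_PREFIX + name)

assert lookup_alias("transformer.word_embeddings.weight", metadata) == "lm_head.weight"
```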

@Narsil (Collaborator) commented Jul 3, 2023

> However, it is still only a fix for this specific model.

Indeed, but the other one can lead to potentially catastrophic failure (loading the wrong weights), which is even worse IMO.

Are you OK if I update this PR? (If I can; otherwise I'll just create a new one with you as co-author.)

@Vinno97 (Contributor, Author) commented Jul 4, 2023

Alright, fair point. Silently loading the wrong weights is definitely undesired.
Sure, update this PR!

@Vinno97 (Contributor, Author) commented Jul 6, 2023

I tested the new fix "on my machine" and it worked. A colleague then used it and it didn't work for him. The difference? In his version of model-00001-of-00018.safetensors, the transformer.word_embeddings.weight was there, but the lm_head.weight was gone.

Unless the conversion can be made consistent, this means we should also alias the weights the other way around: either explicitly, as in the current patch, or automatically (two-way aliases by default).

I don't know why his safetensors conversion is different; I guess because the order of dictionary keys is not guaranteed to be consistent, which then affects safetensors.torch._remove_duplicate_names. I'm pretty sure that using its preferred_names argument would fix this weight mapping issue, but it'd be a model-specific fix in a generic part of the code-base. It might work, but it'd be nicer to keep the fix in flash_rw.py.
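To make the two-way idea concrete, a small sketch (the helper name symmetrize is made up):

```python
def symmetrize(aliases):
    # Make every alias relation bidirectional, so the lookup works no
    # matter which of the tied tensors the conversion happened to keep.
    out = {name: list(twins) for name, twins in aliases.items()}
    for name, twins in aliases.items():
        for twin in twins:
            out.setdefault(twin, [])
            if name not in out[twin]:
                out[twin].append(name)
    return out

print(symmetrize({"transformer.word_embeddings.weight": ["lm_head.weight"]}))
# {'transformer.word_embeddings.weight': ['lm_head.weight'],
#  'lm_head.weight': ['transformer.word_embeddings.weight']}
```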

OlivierDehaene pushed a commit that referenced this pull request Jul 7, 2023
- Look at `transformers` base class to check for
  `_key_to_ignore_on_load_missing` or `_tied_weights` which are the
  standard attributes to select the keys to NOT save on disk (since they
  are ignored)

- Modified safetensors code (to be reflected in safetensors even if it's
  an internal function).
  
- Will not work for trust_remote_code=True repos (like santacoder).

Should help with: #555, #501, #556, and #482 (comment).
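A sketch of the approach that commit describes, collecting discardable names from a `transformers` model class (the attribute names are version-dependent, and `_keys_to_ignore_on_load_missing` entries are patterns rather than exact names):

```python
def discard_names(model_class):
    # Gather names the model marks as tied or ignorable-on-load,
    # i.e. candidates that are safe to drop from the safetensors file.
    names = set()
    for attr in ("_keys_to_ignore_on_load_missing", "_tied_weights_keys"):
        names.update(getattr(model_class, attr, None) or [])
    return names

# Hypothetical usage:
# import transformers
# discard_names(transformers.GPT2LMHeadModel)
```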
@lppllppl920 commented:
Any update on this PR? I also encountered the issue of the missing lm_head.weight when I try to load a Falcon model with text-generation-inference.

@Narsil (Collaborator) commented Aug 2, 2023

Hi @lppllppl920, thanks for the ping. I'm not sure why it wasn't merged.

Narsil changed the base branch from main to dev August 2, 2023 17:54
Narsil merged commit 9bcac46 into huggingface:dev Aug 2, 2023
2 of 5 checks passed
Narsil added a commit that referenced this pull request Aug 2, 2023
Automatically map deduplicated safetensors weights to their original values (#501) (#761)

CI check for #501.

Co-authored-by: Vincent Brouwers <vincentbrouwers9@gmail.com>
Co-authored-by: Vincent Brouwers <vincent.brouwers@ing.com>
@lppllppl920 commented:
> Hi @lppllppl920, thanks for the ping. I'm not sure why it wasn't merged.

Thank you!
