Fix keys name for Transformer #2529

Merged · 8 commits · Jun 5, 2024
Conversation

@Adel-Moumen (Collaborator) commented Apr 26, 2024

What does this PR do?

This PR solves an issue introduced by #2489, which renamed a key in the Transformer decoder from self.mutihead_attn to self.multihead_attn. The rename breaks loading of existing state dicts, since the old key is no longer recognized. To solve this problem, I introduce a new function, map_old_state_dict_weights, which is applied within torch_recovery, average_checkpoints, and _load_from_state_dict. The first covers loading a checkpoint, the second covers averaging multiple checkpoints (where every checkpoint must carry the same keys before the average is taken), and the last covers loading a state_dict directly from the object, which happens in our codebase when the checkpointer is bypassed.
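For reference, a minimal sketch of what such a key-mapping helper could look like; the signature, the substring-replacement strategy, and the toy usage below are illustrative assumptions, not the exact implementation in speechbrain/utils/checkpoints.py:

```python
import logging

logger = logging.getLogger(__name__)


def map_old_state_dict_weights(state_dict, mapping):
    """Rename keys of ``state_dict`` following ``mapping`` (old substring -> new substring).

    Sketch only: the real helper in speechbrain/utils/checkpoints.py may differ.
    """
    for old, new in mapping.items():
        # Snapshot the keys because the dict is mutated while iterating.
        for key in list(state_dict.keys()):
            if old in key:
                new_key = key.replace(old, new)
                # Log every rename so a silent remapping never goes unnoticed.
                logger.info(
                    "Due to replacement compatibility rule '%s'->'%s', renamed "
                    "`state_dict['%s']`->`state_dict['%s']`",
                    old, new, key, new_key,
                )
                state_dict[new_key] = state_dict.pop(key)
    return state_dict


# Toy usage with the rule covering the decoder attention rename from #2489
# (a float stands in for the real weight tensor):
old_sd = {"1.decoder.layers.0.mutihead_attn.att.in_proj_weight": 0.0}
new_sd = map_old_state_dict_weights(old_sd, {".mutihead_attn": ".multihead_attn"})
assert "1.decoder.layers.0.multihead_attn.att.in_proj_weight" in new_sd
```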

Before submitting
  • Did you read the contributor guideline?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Does your code adhere to project-specific code style and conventions?

PR review

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified
  • Confirm that the changes adhere to compatibility requirements (e.g., Python version, platform)
  • Review the self-review checklist to ensure the code is ready for review

@Adel-Moumen Adel-Moumen marked this pull request as ready for review April 30, 2024 13:37
@Adel-Moumen Adel-Moumen self-assigned this Apr 30, 2024
@Adel-Moumen Adel-Moumen added bug Something isn't working important labels Apr 30, 2024
@Adel-Moumen Adel-Moumen added this to the v1.0.1 milestone Apr 30, 2024
Review comments on speechbrain/utils/checkpoints.py — resolved
@asumagic (Collaborator) commented Jun 5, 2024

Seems like the PR currently doesn't resolve issues with HF models loaded through inference, see #2549 (comment)

To solve this, I made torch_parameter_transfer use the same logic, which I think is OK.

Other calls that do load model state_dicts(?) but shouldn't be an issue:

  • In _cycliclrloader

Other calls that use torch.load for other things that shouldn't be an issue:

  • In InputNormalization (arbitrary dict)
  • In schedulers.py (arbitrary dict)
  • In huggingface_transformers/* (non-SB state_dicts)
  • In k2_integration/* (seems like graphs and other k2 stuff)
  • In inference/TTS.py (arbitrary dict)
  • In aligner.py (arbitrary dict)
  • In recipes/Voicebank/ASR/CTC/train.py (not used and local, probably not a worry)

@asumagic (Collaborator) commented Jun 5, 2024

OK, IMO the PR is ready for merging but maybe someone should sanity check the few commits I've done.

This key patching is signaled to the user with an INFO-level notice. In practice, this means it only appears in the training logs (i.e. it is written to log.txt but not to stdout/stderr), unless the user enables a more verbose logging level.

In the case of mutihead_attn, this is shown multiple times per model (which IMO is fine, I'd rather show all occurrences of key patching).

It looks like this:

INFO:speechbrain.utils.checkpoints:Due to replacement compatibility rule '.mutihead_attn'->'.multihead_attn', renamed `state_dict['1.decoder.layers.0.mutihead_attn.att.in_proj_weight']`->`state_dict['1.decoder.layers.0.multihead_attn.att.in_proj_weight']`
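The prefix above suggests these notices go through the standard Python logging module at INFO level, so a user who wants to see them on the console as well could raise the verbosity. A minimal sketch, assuming no custom handler configuration is already in place:

```python
import logging

# Route INFO-level records, including the key-patching notices emitted by
# speechbrain.utils.checkpoints, to the console instead of only log.txt.
logging.basicConfig(level=logging.INFO)
```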

@Adel-Moumen (Collaborator, Author) commented

Sounds good to me.

@asumagic merged commit e627b42 into speechbrain:develop on Jun 5, 2024
4 checks passed