
Question on Token Masking in 4M Implementation #11

Closed
yzhang511 opened this issue Jul 1, 2024 · 1 comment


yzhang511 commented Jul 1, 2024

Thank you for open sourcing your amazing work.

I have a question regarding the token masking implementation: https://github.com/apple/ml-4m/blob/main/fourm/models/fm.py#L429

I understand setting the tokens to 0, but I'm curious about why the positional embeddings are masked as well. If both the tokens and their positional embeddings are set to 0, how does the model distinguish between the masked positions? Wouldn't it treat all of them identically? Would it make sense to add the positional embeddings after masking instead?
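
For concreteness, this is roughly the pattern I'm asking about (made-up shapes and names, not the exact fm.py code):

```python
import torch

B, N, D = 2, 8, 16                         # batch, sequence length, embed dim
x = torch.randn(B, N, D)                   # decoder tokens
pos_emb = torch.randn(B, N, D)             # positional embeddings
mask = torch.zeros(B, N, dtype=torch.bool)
mask[:, 5:] = True                         # these positions get masked

x = x * (~mask).unsqueeze(-1)              # masked tokens -> 0
pos_emb = pos_emb * (~mask).unsqueeze(-1)  # their positional embeddings -> 0 too
# Every masked position now receives an identical all-zero input.
```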

We can use causal attention to remedy this, but I'm wondering if I've misinterpreted the token masking process. Could you clarify this approach? Thank you!

dmizr (Collaborator) commented Jul 5, 2024

Hi @yzhang511 ,

Thanks for your interest in 4M! I believe there is some confusion due to the double meaning of the word "mask" in our code and in the literature.

The decoder_mask that you're referring to is an invalid / ignore mask, indicating which tokens should be entirely ignored by the decoder when the number of valid tokens is less than the decoder sequence length (akin to padding tokens in LMs). This is also why that same operation is performed in the encoder, to remove invalid tokens there.
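
In other words, such an ignore mask plays the same role as a padding mask. Here's a rough sketch in plain PyTorch (not the 4M code itself) of the idea:

```python
import torch
import torch.nn.functional as F

B, N, D = 2, 8, 16
tokens = torch.randn(B, N, D)
ignore = torch.zeros(B, N, dtype=torch.bool)
ignore[:, 6:] = True                       # positions beyond the valid length

# Zeroing the embeddings (and positional embeddings) of ignored tokens is
# harmless, because the attention mask below hides them from every query anyway.
tokens = tokens * (~ignore).unsqueeze(-1)

attn_mask = (~ignore)[:, None, :]          # True = this key may be attended to
out = F.scaled_dot_product_attention(tokens, tokens, tokens, attn_mask=attn_mask)
```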

If you're looking for the "T5 / MAE" token masking implementation, the bulk of it is defined in masking.py as part of data loading (see the sketch after the list below), where we define:

  • Which tokens to give to the encoder
  • Which tokens to give to the decoder
  • Which tokens to discard altogether (i.e., which are "invalid")
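
A simplified illustration of that partitioning idea (hypothetical function, not the actual masking.py API):

```python
import torch

def partition_tokens(num_tokens: int, num_encoder: int, num_decoder: int):
    """Randomly split token indices into encoder-visible, decoder-target,
    and discarded ("invalid") sets."""
    perm = torch.randperm(num_tokens)
    encoder_idx = perm[:num_encoder]                            # given to the encoder
    decoder_idx = perm[num_encoder:num_encoder + num_decoder]   # predicted by the decoder
    discard_idx = perm[num_encoder + num_decoder:]              # ignored entirely
    return encoder_idx, decoder_idx, discard_idx

enc, dec, drop = partition_tokens(num_tokens=196, num_encoder=128, num_decoder=49)
```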

Then, in forward_mask_encoder() and forward_mask_decoder() of the forward pass, we gather the valid tokens out of all the concatenated tokens for the encoder / decoder respectively, such that the valid tokens are at the beginning of the sequence and the invalid ones at the end. The operation that sets the decoder tokens to 0 for image-like modalities (i.e. BERT/MAE masking) is in cat_decoder_tensors().
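
Roughly, the gathering and the zeroing look like this (a generic sketch, not the exact fm.py code):

```python
import torch

B, N, D = 2, 10, 16
tokens = torch.randn(B, N, D)            # concatenated decoder tokens
pos_emb = torch.randn(B, N, D)           # their positional embeddings
invalid = torch.zeros(B, N, dtype=torch.bool)
invalid[:, 7:] = True                    # last 3 tokens are invalid / padding

# A stable sort on the invalid flag moves valid tokens (flag == 0) to the front.
order = torch.argsort(invalid.int(), dim=1, stable=True)
gather_idx = order.unsqueeze(-1).expand(-1, -1, D)
tokens = torch.gather(tokens, 1, gather_idx)
pos_emb = torch.gather(pos_emb, 1, gather_idx)
invalid = torch.gather(invalid, 1, order)

# BERT/MAE-style decoder input for image-like modalities: zero the token
# content but keep the positional embedding, so each masked query still
# knows which position it is asked to predict.
decoder_input = torch.zeros_like(tokens) + pos_emb
```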

Hope this makes everything clearer. If you have any further questions, please don't hesitate to ask.

Best, David

dmizr closed this as completed Jul 5, 2024