Thank you for open sourcing your amazing work! I have a question regarding the token masking implementation: https://github.com/apple/ml-4m/blob/main/fourm/models/fm.py#L429

While I understand setting the tokens to 0, I'm curious about also masking the positional embeddings. If we zero out both the tokens and their positional embeddings, how does the model distinguish between them? Wouldn't the model treat these tokens identically? Would it make sense to add the positional embeddings after masking instead?

Causal attention could remedy this, but I may have misinterpreted the token masking process. Could you clarify this approach? Thank you!
Thanks for your interest in 4M! I believe there is some confusion due to the double meaning of the word "mask" in our code and in the literature.
The `decoder_mask` that you're referring to is an invalid / ignore mask: it indicates which tokens the decoder should ignore entirely when the number of valid tokens is less than the decoder sequence length (akin to padding tokens in LMs). The same operation is performed in the encoder for the same reason, to remove invalid tokens there.
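As a rough illustration (hypothetical names, not the actual 4M API), such an ignore mask simply flags the sequence slots past the number of valid tokens, much like padding masks in LMs:

```python
import numpy as np

# Hypothetical sketch, not the 4M code: an invalid / ignore mask flags the
# slots beyond the number of valid tokens, like padding in LMs.
def make_ignore_mask(num_valid: int, seq_len: int) -> np.ndarray:
    """True = ignore this slot; the first num_valid slots hold real tokens."""
    return np.arange(seq_len) >= num_valid

mask = make_ignore_mask(num_valid=3, seq_len=5)
# mask.tolist() -> [False, False, False, True, True]
```

Attention and loss computations can then simply skip the `True` slots.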
If you're looking for the "T5 / MAE" token masking implementation, the bulk of it is defined in `masking.py` as part of data loading, where we define:
- Which tokens to give to the encoder
- Which tokens to give to the decoder
- Which tokens to discard altogether (i.e., which are "invalid")
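If it helps, that three-way split can be sketched as follows (a toy version with made-up names and budgets, not the actual `masking.py` logic):

```python
import random

# Toy sketch, not masking.py itself: shuffle token indices and split them
# into encoder-visible, decoder-target, and discarded ("invalid") groups.
def split_tokens(num_tokens, num_encoder, num_decoder, seed=0):
    idx = list(range(num_tokens))
    random.Random(seed).shuffle(idx)
    enc = idx[:num_encoder]                           # given to the encoder
    dec = idx[num_encoder:num_encoder + num_decoder]  # targets for the decoder
    discard = idx[num_encoder + num_decoder:]         # ignored altogether
    return enc, dec, discard

enc, dec, discard = split_tokens(10, num_encoder=4, num_decoder=4)
# enc, dec, and discard partition the 10 token indices (4 / 4 / 2 here).
```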
Then, in `forward_mask_encoder()` and `forward_mask_decoder()` of the forward pass, we gather the valid tokens out of all concatenated tokens for the encoder / decoder respectively, such that the valid tokens come first in the sequence and the invalid ones last. The function that sets the decoder tokens to 0 for image-like modalities (i.e., BERT/MAE masking) is `cat_decoder_tensors()`, here.
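For intuition, that gather step can be emulated with a stable argsort on the ignore mask (a hypothetical single-sequence sketch; the real `forward_mask_encoder()` / `forward_mask_decoder()` operate on batched tensors):

```python
import numpy as np

# Hypothetical sketch of the gather step: a stable argsort on the ignore mask
# (False sorts before True) moves valid tokens to the front, invalid to the end.
def gather_valid_first(tokens: np.ndarray, ignore_mask: np.ndarray):
    order = np.argsort(ignore_mask, kind="stable")
    return tokens[order], ignore_mask[order]

tokens = np.array([10, 20, 30, 40])
ignore = np.array([True, False, True, False])
out, mask = gather_valid_first(tokens, ignore)
# out.tolist() -> [20, 40, 10, 30]; valid tokens first, invalid last.
```

Because the sort is stable, the valid tokens keep their relative order, and the invalid tail can then be truncated or ignored downstream.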
Hope this makes everything clearer. If you have any further questions, please don't hesitate to ask.