Hi! I noticed one more non-obvious thing in the goal-conditioned version of GPT.
Here:
trajectory-transformer/trajectory/models/transformers.py
Lines 288 to 295 in e0b5f12
After you append the goal embeddings to the main sequence, you apply self.blocks twice. Is that intended? Shouldn't a single pass be enough, since the attention mechanism already propagates the goal information to all other embeddings?
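To make the question concrete, here is a minimal sketch of the pattern being asked about. The names (`blocks`, `goal`, the layer sizes) are illustrative stand-ins, not the repo's actual identifiers, and standard `nn.TransformerEncoderLayer` modules stand in for the repo's custom blocks:

```python
import torch
import torch.nn as nn

embed_dim, n_layers = 32, 2

# Stack of transformer blocks (stand-in for the model's self.blocks).
blocks = nn.Sequential(*[
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
    for _ in range(n_layers)
])

x = torch.randn(1, 10, embed_dim)    # main sequence embeddings
goal = torch.randn(1, 1, embed_dim)  # goal embedding

# Append the goal to the sequence, then run the blocks once.
# After this pass, attention has already mixed goal information
# into every position of the sequence.
h = blocks(torch.cat([goal, x], dim=1))

# The variant in question runs the same blocks a second time.
h_twice = blocks(h)
```

Since each block attends over the full sequence including the goal token, the single-pass output `h` already carries goal information at every position, which is the basis for asking whether the second pass is redundant.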