
double forward in goal gpt #5

Closed

Howuhh opened this issue Feb 4, 2022 · 1 comment

Howuhh commented Feb 4, 2022

Hi! I noticed one more non-obvious thing in the goal-conditioned version of GPT.

Here:

```python
gx = torch.cat([goal_embeddings, x], dim=1)
gx = self.blocks(gx)
x = gx[:, self.observation_dim:]
#### /goal
x = self.blocks(x)
## [ B x T x embedding_dim ]
x = self.ln_f(x)
```

After the goal embeddings are concatenated to the main sequence, `self.blocks` is applied twice. Is that how it's intended to work? Shouldn't a single pass be enough, since the attention mechanism already propagates all the needed goal information to every embedding?
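For reference, a single-pass version could look roughly like this self-contained sketch (this uses `nn.TransformerEncoder` as a stand-in for the actual blocks; `GoalGPTSketch`, its dimensions, and the slice width `n_goal` are assumptions for illustration, not the repo's code):

```python
import torch
import torch.nn as nn

class GoalGPTSketch(nn.Module):
    def __init__(self, embedding_dim=32, n_layers=2, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=embedding_dim, nhead=n_heads, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.ln_f = nn.LayerNorm(embedding_dim)

    def forward(self, x, goal_embeddings):
        # Prepend the goal tokens; attention then carries goal
        # information to every other position in a single pass.
        n_goal = goal_embeddings.shape[1]
        gx = torch.cat([goal_embeddings, x], dim=1)
        gx = self.blocks(gx)   # one pass through the transformer blocks
        x = gx[:, n_goal:]     # drop the goal positions again
        return self.ln_f(x)    # [ B x T x embedding_dim ]

model = GoalGPTSketch()
x = torch.randn(8, 10, 32)     # [ B x T x embedding_dim ]
goals = torch.randn(8, 1, 32)  # one goal token per sequence
out = model(x, goals)
assert out.shape == (8, 10, 32)
```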

jannerm (Owner) commented Feb 15, 2022

Good catch! Fixed in the linked commit.

jannerm closed this as completed Feb 15, 2022