Hi,
As far as I understand the model and the usage of GPT2, shouldn't the `get_dummy_token` function return `torch.ones() * -100` instead of `torch.zeros()`? This is because we should be ignoring the outputs of GPT2 for these prefix inputs: `-100` is the default `ignore_index` of the cross-entropy loss, so prefix positions labeled with it would not contribute to the loss. Currently, it forces the model to predict token 0, which is the exclamation mark ("!").

Reference lines: https://github.com/rmokady/CLIP_prefix_caption/blob/main/train.py#L222-L223
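For concreteness, here is a minimal sketch of the proposed change, assuming the method signature and `self.prefix_length` attribute from the linked `train.py`; it uses `torch.full` rather than `torch.ones() * -100`, which should be equivalent but more direct:

```python
import torch

def get_dummy_token(self, batch_size: int, device: torch.device) -> torch.Tensor:
    # Sketch of the proposed fix: -100 is the default ignore_index of
    # torch.nn.CrossEntropyLoss, so labels filled with it are excluded
    # from the language-modeling loss instead of training the model to
    # predict token 0 ("!") at the prefix positions.
    return torch.full(
        (batch_size, self.prefix_length), -100, dtype=torch.int64, device=device
    )
```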
Thanks!