
Questions about length parameters #22

Closed · yumoxu opened this issue Jul 22, 2020 · 1 comment

yumoxu commented Jul 22, 2020

Hello, thanks for your great work! It's novel and inspiring.

I am trying to extend PPLM for some downstream applications (with a different pre-trained model and a different discriminator). I went through your code, but I am not sure about the function of the parameters --length and --grad_length.

From my understanding so far, --length bounds the generation loop that decodes one token at a time, so I take it to control the generation length. Can you please confirm this?

If so, this condition does not seem straightforward to me: stepsize is set to 0 for generation steps in [grad_length, length], and as a result no update is applied at those steps (the gradient step is effectively 0).

But I also noticed that the default value of grad_length is 10000, which is much larger than the default value of length (100). If grad_length > length, the condition above is always False and the original stepsize is always used, so I am confused about the purpose of this condition. Can you also please clarify this?
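For concreteness, here is a minimal, self-contained sketch of how I currently read the interaction between the two flags (only --length, --grad_length, and stepsize come from the script; every other name is a stand-in I made up):

```python
def perturb_past(past, step):
    # Stand-in for the gradient-based update of the key-value latents;
    # with step == 0 the latents pass through unchanged.
    return [p + step for p in past]

def sample_next_token(past):
    # Stand-in for decoding one token from the (possibly perturbed) past.
    return round(sum(past), 2)

length, grad_length, stepsize = 100, 10000, 0.02
past, tokens = [0.0], []
for i in range(length):               # --length bounds the loop: one token per step
    step = stepsize if i < grad_length else 0.0  # the condition I am asking about
    past = perturb_past(past, step)
    tokens.append(sample_next_token(past))
# With the defaults, grad_length (10000) > length (100), so `step` is
# never zeroed and the original stepsize is used at every step.
```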

Thanks!

dathath (Contributor) commented Jul 26, 2020

Hello,

That is correct. --length controls the generation length.

--grad-length is the number of time-steps during which the key-value pairs are updated to steer generation. After that, the step size is set to 0, but the updated latents for the previous time-steps stay the same (they are not reset), so the generation is likely to retain some of the conditioning. We found that this can help prevent some degeneration of the language. See Table S19 in the Appendix -- it shows how grad-length can be used to help preserve fluency when generating longer passages.
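Schematically (a toy illustration of the point above, not the repo's code): the perturbation keeps accumulating while i < grad_length; afterwards the step size is zero, so the accumulated perturbation stops growing but is carried forward rather than reset, and later tokens are still generated from latents that encode the earlier steering:

```python
# Toy numbers: the accumulated perturbation of the latents over time-steps.
grad_length, length, stepsize = 3, 8, 1.0
delta, history = 0.0, []
for i in range(length):
    step = stepsize if i < grad_length else 0.0  # step size zeroed after grad_length
    delta += step  # earlier updates are carried forward, never subtracted out
    history.append(delta)
print(history)  # [1.0, 2.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
```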

dathath closed this as completed Jul 28, 2020