Remove the beam search generator #38
Comments
Hi Alessandro, generation for this type of model has to occur one character at a time, because each character is chosen based on all of the previous characters. You can disable beam search by passing "--beam_width 1" during chat, which will make generation faster (and much worse), but it will still choose one character at a time.
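To illustrate why generation is inherently sequential: each character is sampled from a distribution conditioned on everything generated so far, so the next character cannot be produced until the previous one is chosen. A minimal sketch (the `next_char_distribution` function here is a hypothetical stand-in for the model's forward pass, not the repo's actual code):

```python
import random

# Hypothetical stand-in for the RNN's forward pass: given the text so far,
# return a probability distribution over the next character.
def next_char_distribution(history):
    alphabet = "abc "
    # Dummy uniform distribution, purely for illustration.
    return {c: 1.0 / len(alphabet) for c in alphabet}

def generate(seed, length):
    text = seed
    for _ in range(length):
        # The distribution depends on all previous characters, so each
        # character must be sampled before the next one can be computed.
        probs = next_char_distribution(text)
        chars, weights = zip(*probs.items())
        text += random.choices(chars, weights=weights, k=1)[0]
    return text

print(generate("hello", 10))
```

Beam search adds to this by keeping several candidate continuations alive at each step and extending each of them, which is why it is slower than sampling a single character per step.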
@geroale Just wanted to share something similar I wanted and how I did it. Although it does not return the output instantly, it does return it as a whole sentence at once, which is useful for creating APIs or integrating the output into your own code:
You just store the characters in a list and, once the loop ends, join them into a single string.
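The buffering idea above can be sketched like this (`sample_next_char` is a hypothetical stand-in for the model's sampling step, not code from this repo):

```python
# Hypothetical stand-in for the model: sample the next character given
# the text generated so far.
def sample_next_char(history):
    return "a" if len(history) % 2 == 0 else "b"

def generate_sentence(seed, length):
    chars = []                      # buffer characters instead of printing live
    history = seed
    for _ in range(length):
        c = sample_next_char(history)
        chars.append(c)
        history += c
    return "".join(chars)           # return the whole sentence at once

print(generate_sentence("", 6))     # prints "ababab"
```

The generation still happens one character at a time internally; the only change is that nothing is printed until the loop finishes, so the caller receives the complete sentence as a single string.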
Hi @pender, first of all, thank you so much for your repo. I found it really helpful for learning about RNNs: the code is clear enough, the reddit-trained model works well, and everything is cool.
I have only one question: is it possible to generate the answer instantly from the model, without the character-after-character generation?
I think I have understood that this generation effect comes from the beam search generator, but I can't fully get how it works.
It would be great if the model could write the answer in one instant, or if there were a parameter by which the user could decide which type of generation to use.
Thanks for your work and everything.
Alessandro.