Help Increasing the amount of training/fine-tuning text to about 10k words #20

Open
sleekmike opened this issue Jul 11, 2020 · 0 comments


@sleekmike

Hello,
I am trying to fine-tune the GPT-2 model using your wrapper. I have successfully trained it on a text file, but I would like to fine-tune it on a larger corpus of about 10,000 words on a specific topic/domain and have it generate 500-1000 words of output. However, I keep getting a strange error when I try this.
How do I increase the amount of training/fine-tuning text from the current limit to about 10,000 words?
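For context, here is a minimal sketch of the intended workflow, assuming a gpt-2-simple-style API (the wrapper used in this repo is not named in the issue, so the file name and exact calls below are illustrative):

```python
# Sketch assuming the gpt-2-simple library; the wrapper in this repo
# may expose a different API. "corpus.txt" is a hypothetical file name.
import gpt_2_simple as gpt2

# Download the smallest (124M-parameter) base model once.
gpt2.download_gpt2(model_name="124M")

sess = gpt2.start_tf_sess()

# Fine-tune on a plain-text corpus; a ~10k-word .txt file is small by
# fine-tuning standards, since training samples are cut into
# 1024-token windows of the file.
gpt2.finetune(sess,
              dataset="corpus.txt",
              model_name="124M",
              steps=1000)

# length is measured in tokens, and GPT-2's context window caps a single
# sample at 1024 tokens (roughly 700-750 English words), so generating
# 500-1000 words may require stitching together multiple samples.
gpt2.generate(sess, length=1000)
```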
