How to process raw text files to create a similar "PrettyBig" model? #2
Modify create_tfrecords.py: at the top, the filename and path declarations point to the text files. Make sure the line:
I have been able to train 4 gig of text on a Colab TPU over a number of days to 60k iterations. Results aren't as good as finetuning existing models (yet). The biggest model that still fit on Colab was:
With such small batch sizes, I'm not sure it will ever work well, and without bfloat16 working for inference I can't get bigger models onto Colab. The author says he trained his models on a pod (which with 'evaluation' pricing would cost tens of thousands of dollars). In the original GPT2 paper the authors claimed that variation in the training text was important for bigger models, so it might be that training a model from scratch on a domain specific 4gig corpus won't ever do as well as training on a general 40gig corpus and then finetuning on the domain. I have been able to get some really good results from the original 345M GPT2 model by finetuning on domain specific content; the finetuned model maintains context well through multiple paragraphs.
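Roughly, the edit at the top of create_tfrecords.py looks something like this; the variable names here are illustrative only, not the script's actual identifiers, so match them to the declarations in your copy of the file:

import glob
import os

# Illustrative only: point the input file list at your own raw .txt files
# and choose an output directory for the generated .tfrecords shards.
base_dir = "/content/my_corpus"                     # folder holding the raw .txt files
input_files = sorted(glob.glob(os.path.join(base_dir, "*.txt")))
out_dir = "/content/tfrecords"                      # where the .tfrecords shards get written
os.makedirs(out_dir, exist_ok=True)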
Thanks for the information. There's a lot to test, and as you say it's likely true that "... a domain specific 4gig corpus won't ever do as well as training on a general 40gig corpus and then finetuning on the domain." I'm having good results too with genre-specific models based on the OpenAI 345M. I can only hope they decide to release their larger models within 6 months. Closing this now.
I'm sorry the scripts are pretty poorly documented; I'm planning to make a better custom dataset setup when I get the time. You basically just want to use create_tfrecords.py, as kizinfo said, to generate the .tfrecords files from your txt files. You do NOT have to add <|endoftext|> manually! If you use my bpe_text function (in inputs.py) as input, it automatically samples "stitch" amount of texts from your dataset, concatenates them with <|endoftext|> in between, and then samples n_ctx tokens from the final result. Make sure that "stitch" is set so that (your minimal text length * stitch) >= n_ctx. I plan on releasing the 1.5B model, see my blog posts about it here and here.
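For intuition, the stitching behaviour is roughly the following; this is a simplified sketch, not the actual bpe_text code in inputs.py (50256 is GPT2's <|endoftext|> token id). The constraint at the end is why the stitch setting matters: with n_ctx = 1024 and a shortest text of 128 tokens, you need stitch >= 8 so the concatenation is always long enough to cut a full window from.

import random

def stitch_sample(docs, stitch, n_ctx, eot_id=50256):
    """Simplified sketch: docs is a list of token-id lists for your texts."""
    picked = [random.choice(docs) for _ in range(stitch)]
    joined = []
    for i, doc in enumerate(picked):
        joined.extend(doc)
        if i < len(picked) - 1:
            joined.append(eot_id)  # <|endoftext|> between the stitched texts
    # Requires (shortest doc length * stitch) >= n_ctx, otherwise no full window exists.
    start = random.randint(0, len(joined) - n_ctx)
    return joined[start:start + n_ctx]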
Thanks. I got the basics working based on kizinfo's detailed reply, but it didn't seem to actually reduce the loss. I will check the scripts again as per your advice.
Looking forward to testing your 1.5B model. Checking your blogs now.
Cheers
Enjoyed reading your blogs and I'm in full agreement.
To the points you raised, I'm wondering if your models can be used for fine-tuning a custom corpus model the same way as the current OpenAI 345M version?
I think this ability is a big part of OpenAI's fear, false as it may be, that prevents them from releasing the full code.
I'm getting great results fine-tuning corpus files with nshepperd's repo and selected branches using the current OpenAI 345M.
For example: https://github.com/mkturkcan/GPTune (provides pre-trained models to download), based on the finetuning code released by nshepperd.
The one drawback is waiting for the possible release of the larger OpenAI models to fully test these repos.
It would be fantastic if your version could be tweaked to offer the same ability.
Cheers, and thanks again for the great work.
Thanks for the repo. I have sampling working fine from your "PrettyBig" model.
I would like to generate my own dataset from 6 gigs of raw, header-free Gutenberg text files, and I was wondering how this can be done using datasets/create_tfrecords.py.
Using tar I've created "RS_2017-04-4_data.xz" from the raw text files and placed it at "openwebtext/RS_2017-04-4_data.xz".
I've edited one of your .json files to include the paths in the required "files.json" (# This file should contain paths to all your RS_--_data. files).
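In case it matters, this is roughly how I generated files.json; I assumed it should just be a flat JSON list of paths, and the paths below are from my setup:

import glob
import json

# List every archive built from the Gutenberg text so create_tfrecords.py can find it.
paths = sorted(glob.glob("openwebtext/RS_2017-04-*_data.xz"))
with open("files.json", "w") as f:
    json.dump(paths, f, indent=2)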
I then ran create_tfrecords.py and it created the parse/RS_2017-04 folders.
90 minutes later, the terminal showed:
Parsing chunk 1 took 54.41039276123047 seconds
-- 0.0% of chunk 1's docs yielded text.
Saving chunk 1 took 1.6689300537109375e-06 seconds
Parsing chunk 2 took 49.19901156425476 seconds
-- 0.0% of chunk 2's docs yielded text.
Saving chunk 2 took 1.1920928955078125e-06 seconds
... parse/RS_2017-04 is still empty
I stopped at this point because I assume something is wrong. Any suggestions on how I can prepare a model similar to "PrettyBig" using standard raw text files?
Cheers,
P.S. Do you plan on releasing the 1.7 model?