How to process raw text files to create similar "PrettyBig" model? #2

Closed
GenTxt opened this issue Jun 6, 2019 · 5 comments

GenTxt commented Jun 6, 2019

Thanks for the repo. Have sampling working fine from your "PrettyBig" model.

I would like to generate my own dataset from 6 GB of raw, header-free Gutenberg text files, and I was wondering how this can be done using datasets/create_tfrecords.py.

Using tar I've created "RS_2017-04-4_data.xz" from the raw text files and placed it at "openwebtext/RS_2017-04-4_data.xz".

I've edited one of your .json files to include the paths in the required "files.json" (per the comment in the script: "This file should contain paths to all your RS_--_data. files").
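For reference, here is roughly how I built files.json (just a sketch; I'm assuming it only needs a flat JSON list of the archive paths, which may not be the exact format the script expects):

```python
import glob
import json

# Assumption: files.json is a flat JSON list of paths to the *_data.xz archives.
paths = sorted(glob.glob("openwebtext/*_data.xz"))
with open("files.json", "w") as f:
    json.dump(paths, f, indent=2)
```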

Running create_tfrecords.py creates the parse/RS_2017-04 folders.

90 minutes later, the terminal shows:

Parsing chunk 1 took 54.41039276123047 seconds
-- 0.0% of chunk 1's docs yielded text.
Saving chunk 1 took 1.6689300537109375e-06 seconds
Parsing chunk 2 took 49.19901156425476 seconds
-- 0.0% of chunk 2's docs yielded text.
Saving chunk 2 took 1.1920928955078125e-06 second

... parse/RS_2017-04 is still empty

I stopped at this point because I assume this is wrong. Any suggestions on how I can prepare a model similar to "PrettyBig" from standard raw text files?

Cheers,

P.S. Do you plan on releasing the 1.7 model?

kizinfo commented Jun 6, 2019

Modify create_tfrecords.py at the top so the filename and path declarations point to your text files. Make sure the line:

files = glob.glob(os.path.join(base_dir, "*.txt"))

properly points to and indexes the source text files. You also need a copy of the existing model encoder, and you need to set files_per to the number of text files to use for each chunk of tfrecords. I already had my texts split into chunks of ~300 books each, separated by <|EndOfText|>; the .py doesn't add an end-of-text token, so if you didn't already put them in your txt files and want them, you'll need to modify the code further. A rough sketch of the edits is below.
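Something like this (a sketch only; the paths, the files_per value, and the optional end-of-text append are examples from my setup, not the script's exact code):

```python
import glob
import os

# Point these at your corpus (example values; adjust for your setup).
base_dir = "/data/gutenberg_txt"  # hypothetical path to the raw .txt files
files = glob.glob(os.path.join(base_dir, "*.txt"))
files.sort()

files_per = 300  # number of .txt files that go into each .tfrecords chunk

# create_tfrecords.py doesn't insert an end-of-text token itself, so if you
# want one between books you can append it to each .txt file beforehand:
for path in files:
    with open(path, "a", encoding="utf-8") as f:
        f.write("\n<|endoftext|>\n")
```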

I've been able to train on 4 GB of text on a Colab TPU over a number of days, out to 60k iterations. Results aren't as good as finetuning existing models (yet). The biggest model that still fit on Colab was:

"n_head": 17,
"lr": 0.00025,
"warmup_steps": 2000,
"beta1": 0.0,
"decay_exponent": 0.8,
"opt_name": "adafactor",
"decay_type": "pow",
"train_batch_size": 8,
"max_steps": 200000,
"predict_batch_size": 1,
"eval_batch_size": 8,
"iterations": 100,
"n_embd": 1020,
"n_ctx": 1024,
 "n_layer": 34,

With such small batch sizes, I'm not sure it will ever work well, and without bfloat16 working for inference I can't fit bigger models on Colab. The author says he trained models on a pod (which, with 'evaluation' pricing, would cost tens of thousands of dollars).
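For a rough sense of scale, the config above works out to somewhere around 475M parameters (a back-of-the-envelope estimate only, assuming the usual ~12 * n_layer * n_embd^2 per-block weight count plus a ~50k BPE vocabulary):

```python
# Rough parameter count for the config above (an estimate, not an exact number).
n_layer, n_embd, n_ctx, n_vocab = 34, 1020, 1024, 50257  # vocab size assumed

block_params = 12 * n_layer * n_embd ** 2      # attention + MLP weights per block
embedding_params = (n_vocab + n_ctx) * n_embd  # token + position embeddings

print((block_params + embedding_params) / 1e6)  # ~477 (million parameters)
```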

In the original GPT-2 paper, the authors claimed that variation in the training text was important for bigger models, so it might be that training a model from scratch on a domain-specific 4 GB corpus won't ever do as well as training on a general 40 GB corpus and then finetuning on the domain.

I've been able to get some really good results from the original 345M GPT-2 model by finetuning on domain-specific content; the output maintains context well through multiple paragraphs.

GenTxt commented Jun 6, 2019

Thanks for the information. There's a lot to test, and as you say it's likely true that "... a domain-specific 4 GB corpus won't ever do as well as training on a general 40 GB corpus and then finetuning on the domain."

I'm having good results too with genre-specific models based on the OpenAI 345M. I can only hope they decide to release their larger models within 6 months.

Closing this now

GenTxt closed this as completed Jun 6, 2019
ConnorJL (Owner) commented Jun 6, 2019

I'm sorry the scripts are pretty poorly documented; I'm planning on making a better custom dataset setup when I get the time. You basically just want to use create_tfrecords.py, as kizinfo said, to generate the .tfrecords files from your txt files.

You do NOT have to add <|endoftext|> manually! If you use my bpe_text function (in inputs.py) as the input function, it automatically samples "stitch" texts from your dataset, concatenates them with <|endoftext|> in between, and then samples n_ctx tokens from the result. Make sure that "stitch" is set so that (your minimum text length * stitch) >= n_ctx.
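Schematically, the sampling works something like this (a simplified sketch, not the actual inputs.py code; enc stands for whatever BPE encoder you load, and eot_token for the <|endoftext|> id):

```python
import random

def stitch_sample(texts, enc, eot_token, stitch, n_ctx):
    # Pick `stitch` documents, join their tokens with <|endoftext|> in between,
    # then take a random n_ctx-token window from the concatenation.
    picked = random.sample(texts, stitch)
    tokens = []
    for i, text in enumerate(picked):
        tokens.extend(enc.encode(text))
        if i < len(picked) - 1:
            tokens.append(eot_token)
    # Needs len(tokens) >= n_ctx, hence the (minimum text length * stitch) >= n_ctx rule.
    start = random.randint(0, len(tokens) - n_ctx)
    return tokens[start:start + n_ctx]
```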

I plan on releasing the 1.5B model; see my blog posts about it here and here.

GenTxt commented Jun 7, 2019 via email

GenTxt commented Jun 7, 2019 via email
