Issue while generating pre training data #8
Comments
Hi @008karan, you can use SentencePiece, but afterwards you need to convert the SPM vocab to a BERT-compatible one. E.g. you could use the following script:

```python
import sys

sp_vocab = sys.argv[1]

# Static part of the vocab: padding token, unused slots and BERT special tokens
bert_vocab = ['[PAD]']
bert_vocab += [f'unused{i}' for i in range(0, 100)]
bert_vocab += ['[UNK]', '[CLS]', '[SEP]', '[MASK]']

# SentencePiece control symbols that should not end up in the BERT vocab
sp_special_symbols = ['<unk>', '<s>', '</s>']

# Every remaining SentencePiece piece becomes a WordPiece token:
# pieces starting with "▁" (word-initial) become plain tokens,
# all other pieces get the "##" continuation prefix
with open(sp_vocab, 'rt') as f_p:
    bert_vocab += [("##" + line.split()[0]).replace('##▁', '') for line in f_p
                   if line.split()[0] not in sp_special_symbols]

print("\n".join(bert_vocab))
```

(The input file is the SPM vocab file, the output is a BERT-compatible vocab file.)

However, I would highly recommend using the Hugging Face Tokenizers library for that. Here are some code snippets for using the Tokenizers library in order to create a BERT-compatible vocab: https://github.com/stefan-it/turkish-bert/blob/master/CHEATSHEET.md#cased-model I used it for creating the vocab file for the Turkish BERT model.
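For reference, training a cased WordPiece vocab with the Tokenizers library looks roughly like the sketch below, modelled on the linked cheatsheet. The corpus path, vocab size and other settings are placeholder assumptions to adapt to your own data:

```python
from tokenizers import BertWordPieceTokenizer

# Placeholder settings for a cased model; adjust for your own corpus
tokenizer = BertWordPieceTokenizer(
    clean_text=True,
    handle_chinese_chars=False,
    strip_accents=False,   # keep accents/diacritics for a cased model
    lowercase=False,
)

tokenizer.train(
    files=["corpus.txt"],  # placeholder path to the pre-training text
    vocab_size=32000,
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
    limit_alphabet=1000,
    wordpieces_prefix="##",
)

# Writes vocab.txt into the given directory (older tokenizers releases used
# tokenizer.save(directory, name) instead of save_model)
tokenizer.save_model(".")
```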
@stefan-it I changed the vocab to BERT's format using your code snippet. Thanks! One strange thing I found: my training data had a size of around 22 GB, but the generated pre-training data is only 13 GB. Usually it should be bigger than the original data size.
Would it be possible to provide an example of how to run the vocab building/tokenizing in advance on this data, including the expected output SentencePiece vocab?
Hugging Face's Tokenizers library worked like a charm.
@parmarsuraj99 Are you training using the HF library?
Yes, to train the tokenizer, and then I use the resulting vocab.txt in build_pretraining_dataset.py. It works.
Is the HF ELECTRA pre-training method available? They were going to publish it.
For ELECTRA? Maybe not. I tried importing Electra from transformers; maybe they are still working on that. But you can still use BertTokenizer. It works well.
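As a quick sanity check, the generated vocab.txt can be loaded with BertTokenizer from transformers before building the pre-training data. This is a minimal sketch; the file name and the sample Hindi sentence are just placeholders:

```python
from transformers import BertTokenizer

# Load the WordPiece vocab produced earlier; keep casing for a cased model
tokenizer = BertTokenizer(vocab_file="vocab.txt", do_lower_case=False)

# Any sentence from the target corpus works here; this one is just an example
tokens = tokenizer.tokenize("यह एक उदाहरण वाक्य है")
print(tokens)

# Rare words should split into "##"-prefixed subwords rather than map to [UNK]
print(tokenizer.convert_tokens_to_ids(tokens))
```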
I created the vocabulary file using the above-mentioned link and am still facing the issue. The output shows: Job 0: Creating example writer
@Sagar1094 could you check that the path to the vocab file is correct? I've also seen this error message in cases where the path to the vocab file was not correct (when using the `--vocab-file` argument).
Hi, I am using Google Colab to run the script build_pretraining_dataset.py, and the vocab.txt file is placed in Google Drive, which I have mounted. I checked the directory with `!ls '/content/drive/My Drive/Vocab_dir/'` and the file with `!head -20 '/content/drive/My Drive/Vocab_dir/vocab.txt'`, whose first line is `[PAD]`. Let me know if I am missing something. Thanks
Ah, you should pass `--vocab-file='/content/drive/My Drive/Vocab_dir/vocab.txt'` as an argument (using the folder name only is not sufficient) :)
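For completeness, a full invocation from a Colab cell could look roughly like the sketch below. Apart from `--vocab-file`, the flag names and paths here are assumptions based on this thread, so they should be checked against the script's `--help` output:

```python
# Hypothetical Colab cell; only --vocab-file is confirmed in this thread,
# the remaining flags and paths are assumptions to verify against --help
!python3 build_pretraining_dataset.py \
    --corpus-dir '/content/drive/My Drive/corpus' \
    --vocab-file '/content/drive/My Drive/Vocab_dir/vocab.txt' \
    --output-dir '/content/drive/My Drive/pretrain_tfrecords' \
    --max-seq-length 128 \
    --num-processes 4
```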
Thanks a lot, I just forgot to include the file name. Silly :)
Could you please share the tokenizer training and saving script?
I am running into an error while running `build_openwebtext_pretraining_dataset.py --data-dir DATA_DIR --num-processes 5`
I am generating pre-training data for Hindi and I am using a SentencePiece vocab for it, but I am getting the following error. I found that this kind of error usually has a known solution, but since the script only takes a vocab file as input and not a SentencePiece model, generating pre-training data from a SentencePiece vocab is the problem. Any solution?
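If only the trained SentencePiece `.model` file is available, its piece list can first be dumped to a plain vocab file and then converted with the script above. This is a minimal sketch with placeholder file names:

```python
import sentencepiece as spm

# Placeholder paths: adapt to your own SentencePiece model
sp = spm.SentencePieceProcessor()
sp.Load("hindi_spm.model")

# Write one piece per line, mirroring the .vocab file that SentencePiece
# normally produces alongside the .model (the scores are omitted here,
# which is fine since the conversion script only reads the first column)
with open("hindi_spm.vocab", "w", encoding="utf-8") as f_out:
    for piece_id in range(sp.GetPieceSize()):
        f_out.write(sp.IdToPiece(piece_id) + "\n")
```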