Issue while generating pre-training data #8

Open
008karan opened this issue Mar 12, 2020 · 15 comments


@008karan

I am generating pre-training data for Hindi and am using a SentencePiece vocab for it. I'm getting the following error.

python build_pretraining_dataset.py --corpus-dir data --vocab-file spiece.vocab --output-dir out --max-seq-length 128 --num-processes 1
Job 0: Creating example writer
Job 0: Writing tf examples
Traceback (most recent call last):
  File "build_pretraining_dataset.py", line 230, in <module>
    main()
  File "build_pretraining_dataset.py", line 218, in main
    write_examples(0, args)
  File "build_pretraining_dataset.py", line 190, in write_examples
    example_writer.write_examples(os.path.join(args.corpus_dir, fname))
  File "build_pretraining_dataset.py", line 143, in write_examples
    example = self._example_builder.add_line(line)
  File "build_pretraining_dataset.py", line 50, in add_line
    bert_tokids = self._tokenizer.convert_tokens_to_ids(bert_tokens)
  File "/home/gamut/Downloads/electra-master/model/tokenization.py", line 130, in convert_tokens_to_ids
    return convert_by_vocab(self.vocab, tokens)
  File "/home/gamut/Downloads/electra-master/model/tokenization.py", line 91, in convert_by_vocab
    output.append(vocab[item])
KeyError: '[UNK]'

I found that this kind of error has a suggested fix elsewhere. But since this script only takes a vocab file as input and not the SentencePiece model, generating pre-training data with a SentencePiece vocab is a problem. Any solution?

@stefan-it

stefan-it commented Mar 12, 2020

Hi @008karan,

you can use SentencePiece, but afterwards you need to convert the SPM vocab to a BERT-compatible one. E.g. you could use the following script:

import sys

sp_vocab = sys.argv[1]

# Static part of the BERT vocab: padding, unused slots and special tokens
bert_vocab = ['[PAD]']
bert_vocab += [f'unused{i}' for i in range(0, 100)]
bert_vocab += ['[UNK]', '[CLS]', '[SEP]', '[MASK]']

# SentencePiece special symbols that should not be copied over
sp_special_symbols = ['<unk>', '<s>', '</s>']

with open(sp_vocab, 'rt') as f_p:
    # SentencePiece marks word-initial pieces with "▁": strip that marker,
    # and prefix all other (word-internal) pieces with "##" as BERT expects.
    bert_vocab += [("##" + line.split()[0]).replace('##▁', '') for line in f_p
                   if line.split()[0] not in sp_special_symbols]

print("\n".join(bert_vocab))

(Input file would be the SPM vocab file, output is a BERT-compatible vocab file).
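For example, assuming the script above is saved as spm_to_bert_vocab.py (the file name is just a placeholder), you can redirect its output into a vocab file that build_pretraining_dataset.py accepts:

python spm_to_bert_vocab.py spiece.vocab > vocab.txt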

However, I would highly recommend using the Hugging Face Tokenizers library for that.

Here are some code snippets for using the Tokenizers library in order to create a BERT-compatible vocab:

https://github.com/stefan-it/turkish-bert/blob/master/CHEATSHEET.md#cased-model

I used it for creating the vocab file for the Turkish BERT model.

@008karan

@stefan-it I converted the vocab to BERT's format using your provided code snippet. Thanks!

One strange thing I found: my training data was around 22 GB, but the generated pre-training data is only 13 GB. Usually it should be bigger than the original data size.

@ddofer

ddofer commented Mar 15, 2020

Would it be possible to provide an example of how to run the vocab/tokenization step in advance on this data, including the expected SentencePiece vocab output?

@parmarsuraj99

Hugging Face's Tokenizers library worked like a charm.

from tokenizers import BertWordPieceTokenizer

# Train a cased WordPiece tokenizer on the raw corpus
tokenizer = BertWordPieceTokenizer(handle_chinese_chars=False, strip_accents=False, lowercase=False)
tokenizer.train(files="/content/corpus_dir/gu.txt")

# Save the trained tokenizer
tokenizer.save("BertWordPieceTokenizer")

@008karan

@parmarsuraj99 Are you training using the HF library?

@parmarsuraj99

Yes, I use it to train the tokenizer and then pass the resulting vocab.txt to build_pretraining_dataset.py. It works; a rough end-to-end call is sketched below.
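For reference (paths, output directory name, and process count are placeholders; the flags are the same ones used earlier in this thread):

# 1. Train a WordPiece tokenizer and save its vocab (see the snippet above)
# 2. Build the pre-training data with that vocab
python build_pretraining_dataset.py --corpus-dir corpus_dir --vocab-file vocab.txt --output-dir pretrain_tfrecords --max-seq-length 128 --num-processes 4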

@008karan

Is HF's ELECTRA pretraining method available? They were going to publish it.

@parmarsuraj99

For ELECTRA? Maybe not. I tried importing ELECTRA from transformers; maybe they are working on that. But you can still use the BERT tokenizer. It works well.

@Sagar1094

I created the vocabulary file using the above-mentioned link and am still facing the issue.

Job 0: Creating example writer
Job 0: Writing tf examples
Traceback (most recent call last):
File "build_pretraining_dataset.py", line 230, in
main()
File "build_pretraining_dataset.py", line 218, in main
write_examples(0, args)
File "build_pretraining_dataset.py", line 190, in write_examples
example_writer.write_examples(os.path.join(args.corpus_dir, fname))
File "build_pretraining_dataset.py", line 143, in write_examples
example = self._example_builder.add_line(line)
File "build_pretraining_dataset.py", line 50, in add_line
bert_tokids = self._tokenizer.convert_tokens_to_ids(bert_tokens)
File "/content/drive/My Drive/electra-master/model/tokenization.py", line 130, in convert_tokens_to_ids
return convert_by_vocab(self.vocab, tokens)
File "/content/drive/My Drive/electra-master/model/tokenization.py", line 91, in convert_by_vocab
output.append(vocab[item])
KeyError: '[UNK]'

@stefan-it

@Sagar1094 Could you check that the path to the vocab file is correct? I've also seen this error message in cases where the pathname to the vocab file was not correct (when using the build_pretraining_dataset.py script).
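A quick sanity check is to confirm the file opens and actually contains the special tokens (a minimal sketch; the path is a placeholder):

# Minimal check for a BERT-style vocab file; adjust the path to your setup
vocab_path = "vocab.txt"

with open(vocab_path, "rt", encoding="utf-8") as f:
    tokens = [line.strip() for line in f if line.strip()]

for special in ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"]:
    assert special in tokens, f"{special} missing from {vocab_path}"

print(f"{len(tokens)} tokens loaded, all special tokens present")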

@Sagar1094

Hi, I am using Google Colab to run the build_pretraining_dataset.py script, and the vocab.txt file is placed in Google Drive. I have mounted the drive as well.

Here is the code snippet:
!python build_pretraining_dataset.py \
  --corpus-dir='/content/drive/My Drive/Corpus_dir/' \
  --output-dir='/content/drive/My Drive/tf/' \
  --vocab-file='/content/drive/My Drive/Vocab_dir/' \
  --max-seq-length=128 \
  --do-lower-case

And the output of !ls '/content/drive/My Drive/Vocab_dir/' is:
vocab.txt

Also, the output of !head -20 '/content/drive/My Drive/Vocab_dir/vocab.txt' is:

[PAD]
[UNK]
[CLS]
[SEP]
[MASK]
0
1
2
3
4
5
6
7
8
9
a
b
c
d
e

Let me know if I am missing out on something. Thanks

@stefan-it

stefan-it commented Jun 4, 2020

Ah, you should pass --vocab-file='/content/drive/My Drive/Vocab_dir/vocab.txt' as the argument; the folder name alone is not sufficient :)
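In other words, the call from above becomes:

!python build_pretraining_dataset.py \
  --corpus-dir='/content/drive/My Drive/Corpus_dir/' \
  --output-dir='/content/drive/My Drive/tf/' \
  --vocab-file='/content/drive/My Drive/Vocab_dir/vocab.txt' \
  --max-seq-length=128 \
  --do-lower-case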

@Sagar1094

Thanks a lot, I just missed mentioning the file name. Silly :)

@IssaIssa1

(Quoting @Sagar1094's Colab setup above.)

Could you please share the tokenizer training and saving script?

@elyorman

elyorman commented Aug 4, 2020

(electra) ubuntu@nipa2020-0706:~/EL/electra/electra$ python3 build_openwebtext_pretraining_dataset.py --data-dir DATA_DIR --num-processes 5
Job 0: Creating example writer
Job 2: Creating example writer
Job 3: Creating example writer
Job 1: Creating example writer
Job 4: Creating example writer
Process Process-1:
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "build_openwebtext_pretraining_dataset.py", line 47, in write_examples
    do_lower_case=args.do_lower_case
  File "/home/ubuntu/EL/electra/electra/build_pretraining_dataset.py", line 126, in __init__
    do_lower_case=do_lower_case)
  File "/home/ubuntu/EL/electra/electra/model/tokenization.py", line 116, in __init__
    self.vocab = load_vocab(vocab_file)
  File "/home/ubuntu/EL/electra/electra/model/tokenization.py", line 78, in load_vocab
    token = convert_to_unicode(reader.readline())
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 179, in readline
    return self._prepare_value(self._read_buf.ReadLineAsString())
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/lib/io/file_io.py", line 98, in _prepare_value
    return compat.as_str_any(val)
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/util/compat.py", line 123, in as_str_any
    return as_str(value)
  File "/home/ubuntu/anaconda3/envs/electra/lib/python3.7/site-packages/tensorflow_core/python/util/compat.py", line 93, in as_text
    return bytes_or_text.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd7 in position 0: invalid continuation byte

(Process-2 through Process-5 fail with the same UnicodeDecodeError traceback.)

I am getting this error while running build_openwebtext_pretraining_dataset.py --data-dir DATA_DIR --num-processes 5. Can anyone help with this, please?
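The traceback fails inside load_vocab while reading lines with TensorFlow's file reader, so one likely cause is that an input file (for example the vocab file) is not UTF-8 encoded. A small check you could run (the path is a placeholder):

# Report whether a file is valid UTF-8 and show context around the first bad byte
path = "vocab.txt"

with open(path, "rb") as f:
    data = f.read()

try:
    data.decode("utf-8")
    print(f"{path} is valid UTF-8")
except UnicodeDecodeError as e:
    print(f"{path}: {e}")
    print(data[max(0, e.start - 20):e.start + 20])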
