lm/train.py Data Loss Error #7
Comments
Hi Muhammad, my guess is that you don't have RealNews in the right format -- it only accepts .tfrecord files. However, I don't think you'll be able to train a generator model on CPU, since it's quite large.
I am trying to prepare the .tfrecord files before training the generation model. I ran prepare_lm_data.py with generator=mega~dataset=p0.94.jsonl as the input_fn. Could you please help me solve the following error: Traceback (most recent call last): Thanks.
I am getting the following error while training, even though I have converted the input files into .tfrecord. I couldn't find any help regarding this error. Could you please explain what this error means and how it could be resolved? Thanks.
@helenalee1994: unfortunately that file (for discrimination) can't be used for generation. Among other things, it has additional entries, and the metadata fields have different names 😢 -- for instance, I used 'text' instead of 'article'. Would it help if I publicly shared a small subset of RealNews for debugging? For the full version, please fill out the Google form on the GitHub repo. @Muhammd-Hamza-Sabir You need to pass in --input_file=your_tf_record.tfrecord or something like that 😄
@rowanz Thanks for your response. |
@rowanz Thanks for your quick reply. It would be a great help if you could publicly share a small subset of RealNews. 😄😄😄
@Muhammd-Hamza-Sabir Hello, have you solved the input_file error when converting input files into .tfrecord?
I am trying to train the generator model on CPU and am getting the error below. Could you please explain why this error occurs?
DataLossError (see above for traceback): corrupted record at 0