
throwing bad_alloc after calling model_fn #12

Closed

JasonJPu opened this issue Oct 31, 2018 · 13 comments

@JasonJPu
Contributor

Awesome research! This is a huge breakthrough for NLP.

I'm running BERT-Large on a Cloud TPU, fine-tuning for SQuAD, but I keep getting:

[screenshot: std::bad_alloc error after calling model_fn]

I have nothing else running, so I'm not sure why the machine is running out of memory, and I followed the setup steps exactly (i.e., putting the pre-trained model in a Google Cloud Storage bucket, setting up the TPU, etc.).

@jacobdevlin-google
Contributor

Hmm, I'm guessing the problem is that I do:

d = tf.data.Dataset.from_tensor_slices(feature_map)

to avoid having to write out data files. This creates very large allocations that may or may not fail depending on exactly how TensorFlow/Python was compiled (e.g., what version of the C++ standard library it's using).

I don't think Dataset.from_generator() will work on the TPU because it's implemented with a py_func, which wasn't supported on TPU last time I checked (even for CPU data processing). So I may have to change this to write to intermediate files.
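
Roughly, the pattern in question looks like this (a sketch only; the array names and sizes are illustrative, not the exact run_squad.py code):

import numpy as np
import tensorflow as tf  # TF 1.x API, as used here

# Illustrative sizes: ~100k SQuAD training features, each padded to max_seq_length.
num_features = 100000
max_seq_length = 384

# Every padded feature array is materialized in host memory at once...
feature_map = {
    "input_ids": np.zeros((num_features, max_seq_length), dtype=np.int32),
    "input_mask": np.zeros((num_features, max_seq_length), dtype=np.int32),
    "segment_ids": np.zeros((num_features, max_seq_length), dtype=np.int32),
}

# ...and from_tensor_slices copies these arrays into graph constants,
# which is the very large allocation that can end in std::bad_alloc.
d = tf.data.Dataset.from_tensor_slices(feature_map)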

@jacobdevlin-google
Contributor

To check whether this is the issue, can you make a quick change to truncate the SQuAD training data and see if the bad_alloc goes away?

def read_squad_examples(input_file, is_training):
  ...
  # Add this right before the return statement to only train on 10k examples.
  if is_training:
    examples = examples[0:10000]

  return examples

@JasonJPu
Contributor Author

It works when I truncate the training data!

@JasonJPu
Contributor Author

[screenshot: TPU out-of-memory error during training]

Getting another memory error during training, after enqueuing and dequeuing batches of data from infeed and outfeed.

@jacobdevlin-google
Contributor

I'll work on a fix for the first issue. Actually, I just realized that I can write the TFRecord files to output_dir, so I don't need to change the interface.

For the second issue, can you try reducing the batch size to 32? On our internal version of TF, a batch size of 48 only uses 7.48 GB of memory, but things might be different between versions, and that's cutting it close anyway. I may need to find a better learning rate and num_epochs for batch size 32, but it should work as well as 48 in terms of final accuracy.

@JasonJPu
Contributor Author

A batch size of 32 still ends up about 200 MB over the memory capacity. Using 24 works for now; not sure how this will impact performance yet.

@jacobdevlin-google
Contributor

That's a pretty huge difference; I'll coordinate with the TPU team here to figure out what's causing the mismatch.

@jacobdevlin-google
Contributor

Jason,
I just checked in a (hopeful) fix for the first issue. It might be a little slower to start because it has to write out a TFRecord file to output_dir and then read it back in, but that should add at most a few minutes at startup. Can you try it without truncation (but with batch size 24) to see if it fixes the bad_alloc and trains all the way through for you?
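
In rough outline, the fix follows this pattern (a sketch, not the exact code that was checked in; the helper names and feature fields here are hypothetical): serialize the padded features to a TFRecord file under output_dir, then stream them back with TFRecordDataset so nothing large is ever embedded in the graph.

import collections
import tensorflow as tf  # TF 1.x API

def write_features_to_tfrecord(features, output_file):
  # Each `feature` is assumed to be a dict of equal-length int lists
  # (input_ids, input_mask, segment_ids), already padded to max_seq_length.
  writer = tf.python_io.TFRecordWriter(output_file)
  for feature in features:
    fields = collections.OrderedDict()
    for name in ("input_ids", "input_mask", "segment_ids"):
      fields[name] = tf.train.Feature(
          int64_list=tf.train.Int64List(value=list(feature[name])))
    example = tf.train.Example(features=tf.train.Features(feature=fields))
    writer.write(example.SerializeToString())
  writer.close()

def file_based_input_fn_builder(input_file, seq_length, batch_size):
  # Streams records back from disk; only one batch is decoded at a time.
  name_to_features = {
      "input_ids": tf.FixedLenFeature([seq_length], tf.int64),
      "input_mask": tf.FixedLenFeature([seq_length], tf.int64),
      "segment_ids": tf.FixedLenFeature([seq_length], tf.int64),
  }

  def input_fn(params):
    d = tf.data.TFRecordDataset(input_file)
    d = d.repeat().shuffle(buffer_size=100)
    d = d.apply(tf.contrib.data.map_and_batch(
        lambda record: tf.parse_single_example(record, name_to_features),
        batch_size=batch_size,
        drop_remainder=True))
    return d

  return input_fn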

@jacobdevlin-google
Contributor

For the memory issue, I just confirmed that the TPU memory usage difference is in fact due to improvements that have been made to the TPU compiler since TF 1.11.0 was released. So with TF 1.11.0 it seems like 24 is the max batch size, and in the upcoming version it will be 48. (I'm assuming you're using 1.11.0, since that's what the README said to use.)

@jacobdevlin-google
Contributor

I confirmed that fine-tuning BERT-Large with a batch size of 24, a learning rate of 3e-5, and 2.0 epochs consistently gets 90.7% F1, the same as the paper. I updated the README to reflect this.

Thanks for bringing up this issue!

Please let me know if your bad_alloc issue goes away and if you're able to obtain 90.5%+ on SQuAD dev with a Cloud TPU.

@JasonJPu
Contributor Author

JasonJPu commented Nov 1, 2018

Thanks so much, Jacob! I'm no longer getting the bad_alloc issue, and I was able to run BERT-Large with those parameters and get that F1 score.

I also tried a TPU v3 with the parameters you originally gave (batch size of 48), ran it with no issues, and got an F1 score of 90.9!

Amazing work!

@JasonJPu JasonJPu closed this as completed Nov 1, 2018
@webstruck

@JasonJPu Can you please comment on inference performance? Is it comparable to QANet?

@JasonJPu
Contributor Author

@webstruck Inference is pretty slow, and it depends on whether you use a TPU or a GPU.
