Recommended hardware for gpt2-medium #14
Comments
What batch size are you using? Also, are you using
https://www.gwern.net/GPT-2#training Seems like a single 1080Ti with 11GB should be enough - if you switch to FP16 you wouldn't even need to use gradient checkpointing (Gwern used FP32).
Tweaking the gradient checkpointing was enough to get 774M to work, but not 1.5b. We experimented with FP16 when we were trying to get 1.5b to work on a 1080Ti. It caused a lot of issues: the codebase multiplies by constants which can't be represented in FP16 (so it wound up generating '!!!!!!' infinitely, because that's the first BPE token or whatever), and once we figured that out and converted the pretrained model over to FP16, the output was completely screwed up, so something slightly more clever was obviously required to make reduced precision work. At which point we switched over to Colab TPUs (which opened up an entirely different kettle of worms relating to TPU iterations randomly freezing; our best guess so far is that some reshape or loop makes the TPU very unhappy).
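For context, the constant gwern mentions is most likely the 1e10 used to mask attention logits in src/model.py: float16 tops out around 65504, so the cast turns it into inf and the softmax degenerates. A minimal, hypothetical sketch of the kind of patch that would be needed (not necessarily what was actually done), reusing the `shape_list` and `attention_mask` helpers already defined in that file:

```python
import tensorflow as tf

# Hypothetical FP16-safe variant of mask_attn_weights from src/model.py.
# Upstream does `w = w*b - tf.cast(1e10, w.dtype)*(1-b)`; 1e10 overflows
# float16 and becomes inf, which poisons the softmax.
def mask_attn_weights(w):
    # w: [batch, heads, dst_sequence, src_sequence]
    _, _, nd, ns = shape_list(w)
    b = attention_mask(nd, ns, dtype=w.dtype)
    b = tf.reshape(b, [1, 1, nd, ns])
    # use a large-but-representable constant when running in float16
    big_neg = 65500.0 if w.dtype == tf.float16 else 1e10
    w = w * b - tf.cast(big_neg, w.dtype) * (1 - b)
    return w
```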
@gwern mind mentioning broadly what you tweaked? I am using checkpointing in PyTorch and can't fit even 1 sample into a 12 GB GPU for the 774M version.
I believe we needed something like this:

```diff
diff --git a/src/model.py b/src/model.py
index 4e942d8..71092bc 100644
--- a/src/model.py
+++ b/src/model.py
@@ -124,10 +124,10 @@ def block(x, scope, *, past, hparams):
     with tf.variable_scope(scope):
         nx = x.shape[-1].value
 
         a, present = attn(norm(x, 'ln_1'), 'attn', nx, past=past, hparams=hparams)
-        x = x + a
+        x = x1 = x + a
 
         m = mlp(norm(x, 'ln_2'), 'mlp', nx*4, hparams=hparams)
         x = x + m
-        return x, present
+        return x, present, x1
 
 def past_shape(*, hparams, batch_size=None, sequence=None):
     return [batch_size, hparams.n_layer, 2, hparams.n_head, sequence, hparams.n_embd // hparams.n_head]
@@ -161,9 +161,9 @@ def model(hparams, X, past=None, scope='model', reuse=tf.AUTO_REUSE):
         pasts = tf.unstack(past, axis=1) if past is not None else [None] * hparams.n_layer
         assert len(pasts) == hparams.n_layer
         for layer, past in enumerate(pasts):
-            h, present = block(h, 'h%d' % layer, past=past, hparams=hparams)
-            if layer == 10:
-                tf.add_to_collection('checkpoints', h)
+            h, present, x1 = block(h, 'h%d' % layer, past=past, hparams=hparams)
+            if layer < 48:
+                tf.add_to_collection('checkpoints', x1)
             presents.append(present)
         results['present'] = tf.stack(presents, axis=1)
         h = norm(h, 'ln_f')
```
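Note that the `tf.add_to_collection('checkpoints', ...)` calls above only matter if gradients are computed with OpenAI's gradient-checkpointing package (memory_saving_gradients), as nshepperd's training fork does. Roughly, and assuming `loss` is the language-modelling loss tensor built elsewhere in the training script:

```python
import tensorflow as tf
import memory_saving_gradients  # from github.com/openai/gradient-checkpointing

# checkpoints='collection' tells the rewriter to recompute activations between
# the tensors registered via tf.add_to_collection('checkpoints', ...) instead
# of storing every layer's activations for the backward pass.
train_vars = tf.trainable_variables()
grads = memory_saving_gradients.gradients(loss, train_vars, checkpoints='collection')
opt = tf.train.AdamOptimizer(learning_rate=1e-4)
train_op = opt.apply_gradients(zip(grads, train_vars))
```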
@martinritchie I've been using FP16 O3 -- this is giving me a NaN error in the loss computation after about 55% of an epoch of training: I've also been using a batch size of 2, but I think the NaN error above is specific to FP16. @michaelklachko @gwern I am using FP16 and facing NaN errors.
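If the NaNs are specific to apex O3 (pure FP16, no FP32 master weights), one common mitigation is to drop to the mixed-precision O1/O2 levels so that dynamic loss scaling kicks in. A hedged sketch with a placeholder training loop, assuming `model`, `optimizer`, and `dataloader` are whatever the fine-tuning script already builds:

```python
from apex import amp

# O1 keeps FP32 master weights and enables dynamic loss scaling, which
# usually avoids the NaNs that pure-FP16 (O3) training produces.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for batch in dataloader:
    loss = model(**batch)[0]  # older HF models return the loss first when labels are given
    optimizer.zero_grad()
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()  # scaled so FP16 gradients don't underflow
    optimizer.step()
```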
|
@martinritchie do you have any thoughts on how exactly to perform the gradient checkpointing when the underlying modules return a variable number of tensors, like here: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L478

I get errors like:

Perhaps I could unpack the variable number of tensors explicitly in every sub-module of the
That sounds like it would be a little heavy-handed. Could you provide a minimal working example or show me how you are using it?
https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py#L478

@martinritchie So I basically replaced the above line with:
And that initially failed saying that
And with that, the previous error went away and I now have a different error. Here's a thread I found about this on the PyTorch forums: https://discuss.pytorch.org/t/checkpoint-didnt-support-list-output/16957/3
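For anyone hitting the same wall, one way to sidestep the list-output limitation is to wrap the block call in a closure that returns plain tensors only. A hypothetical sketch (the `run_block`/`custom_forward` names are mine, and it assumes `layer_past` and `head_mask` are None during fine-tuning):

```python
from torch.utils.checkpoint import checkpoint

# Hypothetical replacement for the block call in GPT2Model.forward.
# torch.utils.checkpoint only handles tensor outputs, so the wrapper drops
# 'present' and attention outputs and returns the hidden states alone.
def run_block(block, hidden_states, attention_mask):
    def custom_forward(hs):
        outputs = block(hs, layer_past=None,
                        attention_mask=attention_mask, head_mask=None)
        return outputs[0]
    return checkpoint(custom_forward, hidden_states)
```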
Update: It looks like in the
@martinritchie So I'm using gradient checkpointing like above (I changed the if condition). Any ideas how to get this to work?
Hello!
By adapting the code in this repo, I've been able to fine-tune GPT and GPT-2 small using Topical-Chat with an EC2 instance with 8 Tesla V100 GPUs (32 GB memory each). However, I am unable to fine-tune GPT-2 medium on the same instance with the exact same hyper-parameters - I'm getting out of memory issues, presumably because GPT-2 medium is much larger than GPT-2 small. I haven't tried fine-tuning GPT-2 medium using Persona-Chat yet though.
Have you tried fine-tuning GPT-2 medium (from the `attention` branch in pytorch-pretrained-BERT) on large dialog datasets with long turns, and if so, could you share the details of the underlying hardware used? Thanks!
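As a general memory-side workaround (not something tried in this thread), the per-GPU batch size can be reduced and compensated for with gradient accumulation. A minimal sketch, assuming the usual `model`, `optimizer`, and `dataloader` objects from the fine-tuning script:

```python
accumulation_steps = 4  # effective batch size = per-GPU batch * accumulation_steps

optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = model(**batch)[0]
    # scale the loss so the accumulated gradient matches one large-batch step
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```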