In the course of trying to build a pipeline-parallel GPT2 model, I've run into problems enabling activation checkpointing.
Naively setting activation_checkpoint_interval=1 on my pipeline module (as instructed in the docs/tutorials) results in the following error:
Traceback (most recent call last):
  File "train_pipeline.py", line 93, in <module>
    loss = model_engine.train_batch()
  File "/root/anaconda3/lib/python3.8/site-packages/deepspeed/runtime/pipe/engine.py", line 275, in train_batch
    self._exec_schedule(sched)
  File "/root/anaconda3/lib/python3.8/site-packages/deepspeed/runtime/pipe/engine.py", line 1164, in _exec_schedule
    self._exec_instr(**cmd.kwargs)
  File "/root/anaconda3/lib/python3.8/site-packages/deepspeed/runtime/pipe/engine.py", line 604, in _exec_backward_pass
    torch.autograd.backward(tensors=(outputs, ), grad_tensors=(grad_tensors, ))
  File "/root/anaconda3/lib/python3.8/site-packages/torch/autograd/__init__.py", line 130, in backward
    Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
In the course of debugging, I came across this fun function in https://github.com/microsoft/DeepSpeed/blob/34c83a5a64da67ff186eec4fdd7ab3c1badf0486/deepspeed/runtime/pipe/module.py#L569:

def _is_checkpointable(self, funcs):
    if self.__class__.__name__ == 'GPT2ModelPipe':
        return all('ParallelTransformerLayerPipe' in f.__class__.__name__
                   for f in funcs)
    params = [f.parameters() for f in funcs if isinstance(f, torch.nn.Module)]
    return any(len(list(p)) > 0 for p in params)
To fix my error, I had to edit this function in the DeepSpeed source to return True for my model's transformer blocks, which are, of course, named differently.
It seems to me that matching hardcoded class names like this is a very brittle way to achieve this. Is there anything in the works for a more user-friendly interface?
Perhaps each layer in the pipeline model could expose an .is_checkpointable attribute? Or, at a minimum, a warning could be raised, or this behavior mentioned somewhere in the docs.
I'd be willing to put in some work to fix this if anyone has a solution in mind, as I'd love to be able to run DeepSpeed without maintaining my own fork!