BERT training model failed when adding --deepspeed_transformer_kernel #1155

Open

garvct opened this issue Jun 12, 2021 · 9 comments

Comments

@garvct

garvct commented Jun 12, 2021

Environment
8x A100 GPUs
Using container nvcr.io#nvidia/pytorch:21.05-py3
apt update
pip3 install nvidia-pyindex
pip3 install nvidia-tensorflow
pip3 install numpy --upgrade
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0 7.5 8.0 8.6+PTX"
DS_BUILD_OPS=1 pip3 install deepspeed
pip3 install mpi4py

root@x8a100-0000:/workspace# ds_report

DeepSpeed C++/CUDA extension op report

NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.

JIT compiled ops requires ninja
ninja .................. [OKAY]

op name ................ installed .. compatible

cpu_adam ............... [YES] ...... [OKAY]
fused_adam ............. [YES] ...... [OKAY]
fused_lamb ............. [YES] ...... [OKAY]
sparse_attn ............ [YES] ...... [OKAY]
transformer ............ [YES] ...... [OKAY]
stochastic_transformer . [YES] ...... [OKAY]
async_io ............... [NO] ....... [OKAY]
transformer_inference .. [YES] ...... [OKAY]
utils .................. [YES] ...... [OKAY]
quantizer .............. [YES] ...... [OKAY]

DeepSpeed general environment info:
torch install path ............... ['/opt/conda/lib/python3.8/site-packages/torch']
torch version .................... 1.9.0a0+2ecb2c7
torch cuda version ............... 11.3
nvcc version ..................... 11.3
deepspeed install path ........... ['/opt/conda/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.4.0, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.9, cuda 11.3
root@x8a100-0000:/workspace#

Without --deepspeed_transformer_kernel the training job runs fine on multiple A100 GPUs, but when I add --deepspeed_transformer_kernel I get the following errors:

!!!! kernel execution error. (m: 6144, n: 2048, k: 2048, error: 13)
!!!! kernel execution error. (m: 2048, n: 2048, k: 8192, error: 13)
!!!! kernel execution error. (m: 6144, n: 2048, k: 2048, error: 13)
!!!! kernel execution error. (m: 512, n: 512, k: 64, error: 13)
!!!! kernel execution error. (m: 64, n: 512, k: 512, error: 13)
Traceback (most recent call last):
File "train.py", line 519, in
main()
File "train.py", line 511, in main
run(args, model, optimizer)
File "train.py", line 482, in run
train(args, model, optimizer)
File "train.py", line 180, in train
validation(args, global_data_samples, model)
File "train.py", line 102, in validation
_, (tmp_mlm_loss, tmp_nsp_loss) = model.network(batch, log=False)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1086, in forward
loss = self.module(*inputs, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/nfs2/pndall/bert/src/bert/pytorch/nvidia/modelingpreln.py", line 1156, in forward
sequence_output, pooled_output = self.bert(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/nfs2/pndall/bert/src/bert/pytorch/nvidia/modelingpreln.py", line 981, in forward
encoded_layers = self.encoder(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/workspace/nfs2/pndall/bert/src/bert/pytorch/nvidia/modelingpreln.py", line 602, in forward
hidden_states = layer_module(hidden_states, attention_mask)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1015, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/deepspeed/ops/transformer/transformer.py", line 592, in forward
return DeepSpeedTransformerFunction.apply(hidden_states,
File "/opt/conda/lib/python3.8/site-packages/deepspeed/ops/transformer/transformer.py", line 208, in forward
layer_norm_mean) = forward_func(config.layer_id,
RuntimeError: /home/scratch.efomenko_sw/ml/wip/cask.wip/xmma/cask_plugin/src/gemm/runner.cu:107: cudaFuncSetAttribute(kernel_entry, cudaFuncAttributeMaxDynamicSharedMemorySize, integer_cast<int32_t>(launch_configs[0].smemSizeInBytes)): an illegal memory access was encountered
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: an illegal memory access was encountered

Do you have any suggestions on how I can fix this?
Thank you.
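
If it helps, here is a minimal standalone sketch that should exercise the same kernel path in isolation (my own approximation based on deepspeed.ops.transformer; the layer API and the attention-mask shape here are assumptions and may need adjusting):

import torch
from deepspeed.ops.transformer import DeepSpeedTransformerConfig, DeepSpeedTransformerLayer

# Mirror the dimensions of the failing model: hidden 2048, 32 heads, seq len 512.
cfg = DeepSpeedTransformerConfig()
cfg.batch_size = 4
cfg.hidden_size = 2048
cfg.intermediate_size = 8192
cfg.heads = 32
cfg.num_hidden_layers = 24
cfg.attn_dropout_ratio = 0.1
cfg.hidden_dropout_ratio = 0.1
cfg.initializer_range = 0.02
cfg.pre_layer_norm = True
cfg.fp16 = True

layer = DeepSpeedTransformerLayer(cfg).half().cuda()
hidden_states = torch.randn(4, 512, 2048, dtype=torch.half, device='cuda')
# Extended attention mask (0 = visible), shaped like the one built in modelingpreln.py.
attention_mask = torch.zeros(4, 1, 1, 512, dtype=torch.half, device='cuda')
output = layer(hidden_states, attention_mask)
print('single-layer forward pass completed')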

@RezaYazdaniAminabadi
Contributor

Hi @garvct

Can you please share the config that you are running this with so that I can repro on my side?
Thanks,
Reza

@garvct
Author

garvct commented Jun 14, 2021

python3 train.py
--job_name "BERT Large"
--deepspeed_config ${SOURCE_CODE_PATH}/ds_config.json
--train_table=$SOURCE_CODE_PATH/train
--val_tables=$SOURCE_CODE_PATH/val
--cluster=local
--do_train=True
--do_eval=False
--keep_checkpoint_every_n_iterations=5000
--bert_config_file=$SOURCE_CODE_PATH/bert_config.json
--save_checkpoints_steps=5000
--train_batch_size=$MICRO_BATCH_SIZE
--eval_batch_size=4
--max_seq_length=512
--max_predictions_per_seq=80
--num_train_steps=5000
--max_eval_steps=200
--learning_rate=0.0019
--vocab_path=$SOURCE_CODE_PATH/vocab.txt
--use_xla
--do_whole_word_mask
--use_fp16
--sparse_as_dense=True
--hvd_fp16_compres=True
--use_hvd=True
--use_cpu_opt=False
--prefetch_const_batches=3
--prefetch_const=10
--use_dynamic_masking
--different_segments
--second_task="SOP"
--do_lower_case=True
--do_rm_diacritic=False
--sent_sep=""
--scale_lr_gpu=False
--opt FusedLAMB
--model_type=bert
--grad_accum_steps=$ACCUMULATION_STEPS
--num_warmup_steps=10000
--inter_op_parallelism_threads=4
--save_val_best=False
--memory_optimization=recomputation
--custom_memory_optimization split_optimizer_state
--ln_dtype=float32
--mlm_dtype=float32
--sample_end_token=""
--grad_accum_dtype float16
--deepspeed_transformer_kernel 2>&1 | tee stdouterr_$$

--grad_accum_dtype float16 2>&1 | tee stdouterr_$$

{
  "train_batch_size": 64,
  "train_micro_batch_size_per_gpu": 4,
  "steps_per_print": 1000,
  "prescale_gradients": false,
  "zero_allow_untested_optimizer": true,
  "optimizer": {
    "type": "lamb",
    "params": {
      "lr": 0.0019,
      "weight_decay": 0.01,
      "bias_correction": false
    }
  },
  "gradient_clipping": 1.0,

  "wall_clock_breakdown": false,

  "fp16": {
    "enabled": true,
    "loss_scale": 0,
    "initial_scale_power": 20
  },
  "zero_optimization": {
    "stage": 1
  }
}
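
(A sanity check on the batch arithmetic, just my own back-of-the-envelope rather than anything reported by DeepSpeed: the engine expects train_batch_size = train_micro_batch_size_per_gpu x gradient_accumulation_steps x world_size, so 64 = 4 x 1 x 16 works out for a 16-GPU run with no gradient accumulation.)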

@RezaYazdaniAminabadi
Contributor

Thanks for sharing the script and config.

Can I ask what hidden_size, number of heads, and number of layers you are using here? It seems you are using a hidden size of 2048; however, I don't understand where the m=6144 is coming from. Is it a multiple of the sequence length (512) or of the number of heads?

!!!! kernel execution error. (m: 6144, n: 2048, k: 2048, error: 13)
!!!! kernel execution error. (m: 2048, n: 2048, k: 8192, error: 13)
!!!! kernel execution error. (m: 6144, n: 2048, k: 2048, error: 13)
!!!! kernel execution error. (m: 512, n: 512, k: 64, error: 13)
!!!! kernel execution error. (m: 64, n: 512, k: 512, error: 13)

Also, this is a 1-GPU run, right?

@garvct
Author

garvct commented Jun 14, 2021

DeepSpeed Transformer config is {'layer_id': 23, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 8192, 'heads': 32, 'attn_dropout_ratio': 0.1, 'hidden_dropout_ratio': 0.1, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': 42, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}

This was an attempt to run on 16 A100 GPUs.
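
For what it's worth, the failing GEMM shapes line up with these dimensions (a back-of-the-envelope check on my side, not taken from the kernel source, so the per-GEMM attributions are only guesses):

batch_size, seq_len = 4, 512
hidden, intermediate, heads = 2048, 8192, 32
tokens = batch_size * seq_len                 # 2048
print(3 * hidden, tokens, hidden)             # 6144, 2048, 2048 -> fused QKV projection?
print(tokens, hidden, intermediate)           # 2048, 2048, 8192 -> FFN output projection?
print(seq_len, seq_len, hidden // heads)      # 512, 512, 64     -> per-head attention scores?
print(hidden // heads, seq_len, seq_len)      # 64, 512, 512     -> attention-context GEMM?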

@RezaYazdaniAminabadi
Contributor

Hi @garvct

I am still not able to repro this issue that you are seeing.
I wonder if you can run this test on your side which uses exactly the same config as yours:

pytest tests/unit/test_cuda_backward.py::test_backward[4-2048-512-32-24-True-True-0.05]

DeepSpeed Transformer config is  {'layer_id': 23, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 8192, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}

Thanks,
Reza

@garvct
Author

garvct commented Jun 29, 2021

pytest tests/unit/test_cuda_backward.py::test_backward[4-2048-512-32-24-True-True-0.05]
========================================== test session starts ==========================================
platform linux -- Python 3.8.8, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /workspace/nfs2/yandex/DeepSpeed
plugins: pythonpath-0.7.3, cov-2.11.1, hypothesis-4.50.8
collected 0 items

========================================= no tests ran in 3.32s =========================================
ERROR: not found: /workspace/nfs2/yandex/DeepSpeed/tests/unit/test_cuda_backward.py::test_backward[4-2048-512-32-24-True-True-0.05]
(no name '/workspace/nfs2/yandex/DeepSpeed/tests/unit/test_cuda_backward.py::test_backward[4-2048-512-32-24-True-True-0.05]' in any of [<Module tests/unit/test_cuda_backward.py>])

pytest tests/unit/test_cuda_backward.py
============================= test session starts ==============================
platform linux -- Python 3.8.8, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /workspace/nfs2/yandex/DeepSpeed
plugins: pythonpath-0.7.3, cov-2.11.1, hypothesis-4.50.8
collected 5 items

tests/unit/test_cuda_backward.py ..FF. [100%]

=================================== FAILURES ===================================
_________________ test_backward[8-1600-128-2-3-True-True-0.05] _________________

batch_size = 8, hidden_size = 1600, seq_len = 128, heads = 2, num_layers = 3
is_preln = True, use_fp16 = True, atol = 0.05

@pytest.mark.parametrize('batch_size, hidden_size, seq_len, heads, num_layers, is_preln, use_fp16, atol',
                         [
                             (8,1600,128,25,3,True,True, 0.05),
                             (8,160,128,2,3,True,True, 0.1),
                             (8,1600,128,2,3,True,True, 0.05),
                             (3,1024,119,16,24,True,False, 0.05),
                             (3,1024,115,16,24,True,True, 0.05),
                             #(1024,128,10,2,2,False,False, 0.1),
                             #(3,1024,52,16,24,False,True, 0.2),
                             #(3,128,51,2,24,False,False, 0.1),
                             #(3,128,54,2,24,False,True, 0.2),
                         ]) # yapf: disable
def test_backward(batch_size,
                  hidden_size,
                  seq_len,
                  heads,
                  num_layers,
                  is_preln,
                  use_fp16,
                  atol):
    # Only run fp16 test cases on devices with 7+ capability.
    major, _ = torch.cuda.get_device_capability()
    if major < 7 and (use_fp16 is True or is_preln is False):
        return

    ds_config = DeepSpeedTransformerConfig()
    ds_config.layer_id = None
    ds_config.batch_size = batch_size
    ds_config.hidden_size = hidden_size
    ds_config.intermediate_size = hidden_size
    ds_config.heads = heads
    ds_config.attn_dropout_ratio = 0.0
    ds_config.hidden_dropout_ratio = 0.0
    ds_config.num_hidden_layers = num_layers
    ds_config.pre_layer_norm = is_preln
    ds_config.initializer_range = 0.02
    ds_config.fp16 = use_fp16
  run_backward(ds_config, seq_len, atol=atol, verbose=False)

tests/unit/test_cuda_backward.py:297:


tests/unit/test_cuda_backward.py:253: in run_backward
check_equal(base_grads, ds_grads, atol=atol, verbose=verbose)


first = [[tensor([[[-1.8193, 0.4900, -0.9331, ..., 0.5176, -0.6211, -0.6309],
[ 1.6182, -1.2148, -1.3623, ..., 1...0, -10.3750, -7.9297, ..., -10.5781, -18.3906, 19.5000],
device='cuda:0', dtype=torch.float16), 'N2_W'], ...]
second = [[tensor([[[-1.8193, 0.4900, -0.9331, ..., 0.5171, -0.6211, -0.6313],
[ 1.6191, -1.2148, -1.3623, ..., 1... 3.3945, 6.7188, ..., -13.4453, -4.6484, -1.4775],
device='cuda:0', dtype=torch.float16), 'norm_W'], ...]
atol = 0.05, verbose = False

def check_equal(first, second, atol=1e-2, verbose=False):
    diction_x = {}
    diction_y = {}

    if verbose:
        for i, (x, y) in enumerate(zip(first, second)):
            print(x[1], y[1])

    for i, (x, y) in enumerate(zip(first, second)):
        k = 0
        while (diction_x.get((k, x[1])) is not None):
            k = k + 1
        diction_x[k, x[1]] = x[0]
        k = 0
        while (diction_y.get((k, y[1])) is not None):
            k = k + 1
        diction_y[k, y[1]] = y[0]
    if verbose:
        print()
        for i, (x, y) in enumerate(zip(diction_x, diction_y)):
            print(x, y)

    for i, (x, y) in enumerate(zip(diction_x, diction_y)):
        if (x[0] == 1): continue
        if verbose:
            print("checking ", x[1], ":")
        y = diction_y[x[0], x[1]]
        x = diction_x[x[0], x[1]]
        x = x.cpu().detach().numpy()
        y = y.cpu().detach().numpy()
        if verbose:
            print(x)
            print(y)

        avgx = np.sum(abs(x), dtype=float)
        countx = x.shape[0]
        for i in range(len(x.shape) - 1):
            countx *= x.shape[i + 1]
            avgx = np.sum(avgx)
        tollerance = 1
        if avgx != float('inf') and avgx != -float('inf'):
            avgx = avgx / countx
            tollerance = avgx * atol
        if verbose:
            print("tollerance is ", tollerance)
            print("x = {}".format(x.flatten()))
            print("y = {}".format(y.flatten()))
            print('-' * 80)
      np.testing.assert_allclose(x, y, err_msg="Index: {}".format(i), atol=tollerance)

E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0.432773
E Index: 0
E Mismatched elements: 7 / 2560000 (0.000273%)
E Max absolute difference: 0.5117
E Max relative difference: 972.
E x: array([[-13.08 , -7.492 , -0.2009, ..., -10.91 , -1.674 , 2.598 ],
E [-23.44 , 3.715 , -0.958 , ..., 5.773 , 21.42 , 24.75 ],
E [-16.05 , -8.24 , -0.1768, ..., -10.69 , -4.707 , 5.668 ],...
E y: array([[-13.09 , -7.48 , -0.1837, ..., -10.91 , -1.645 , 2.615 ],
E [-23.4 , 3.715 , -0.968 , ..., 5.758 , 21.4 , 24.73 ],
E [-16.05 , -8.24 , -0.1718, ..., -10.69 , -4.723 , 5.66 ],...

tests/unit/test_cuda_backward.py:73: AssertionError
----------------------------- Captured stdout call -----------------------------
DeepSpeed Transformer config is {'layer_id': 6, 'batch_size': 8, 'hidden_size': 1600, 'intermediate_size': 1600, 'heads': 2, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 3, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #6 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 7, 'batch_size': 8, 'hidden_size': 1600, 'intermediate_size': 1600, 'heads': 2, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 3, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #7 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 8, 'batch_size': 8, 'hidden_size': 1600, 'intermediate_size': 1600, 'heads': 2, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 3, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #8 is created with date type [half].
_______________ test_backward[3-1024-119-16-24-True-False-0.05] ________________

batch_size = 3, hidden_size = 1024, seq_len = 119, heads = 16, num_layers = 24
is_preln = True, use_fp16 = False, atol = 0.05

@pytest.mark.parametrize('batch_size, hidden_size, seq_len, heads, num_layers, is_preln, use_fp16, atol',
                         [
                             (8,1600,128,25,3,True,True, 0.05),
                             (8,160,128,2,3,True,True, 0.1),
                             (8,1600,128,2,3,True,True, 0.05),
                             (3,1024,119,16,24,True,False, 0.05),
                             (3,1024,115,16,24,True,True, 0.05),
                             #(1024,128,10,2,2,False,False, 0.1),
                             #(3,1024,52,16,24,False,True, 0.2),
                             #(3,128,51,2,24,False,False, 0.1),
                             #(3,128,54,2,24,False,True, 0.2),
                         ]) # yapf: disable
def test_backward(batch_size,
                  hidden_size,
                  seq_len,
                  heads,
                  num_layers,
                  is_preln,
                  use_fp16,
                  atol):
    # Only run fp16 test cases on devices with 7+ capability.
    major, _ = torch.cuda.get_device_capability()
    if major < 7 and (use_fp16 is True or is_preln is False):
        return

    ds_config = DeepSpeedTransformerConfig()
    ds_config.layer_id = None
    ds_config.batch_size = batch_size
    ds_config.hidden_size = hidden_size
    ds_config.intermediate_size = hidden_size
    ds_config.heads = heads
    ds_config.attn_dropout_ratio = 0.0
    ds_config.hidden_dropout_ratio = 0.0
    ds_config.num_hidden_layers = num_layers
    ds_config.pre_layer_norm = is_preln
    ds_config.initializer_range = 0.02
    ds_config.fp16 = use_fp16
  run_backward(ds_config, seq_len, atol=atol, verbose=False)

tests/unit/test_cuda_backward.py:297:


tests/unit/test_cuda_backward.py:253: in run_backward
check_equal(base_grads, ds_grads, atol=atol, verbose=verbose)


first = [[tensor([[[-0.4278, 0.1847, 0.0466, ..., 0.1023, -0.1683, 0.0696],
[-0.1171, -0.2996, 0.2820, ..., 0...95e-01, -1.5747e-03, 6.1877e-01, ..., -1.4802e+00,
6.6085e-01, -2.3141e+00], device='cuda:0'), 'N2_W'], ...]
second = [[tensor([[[-0.4278, 0.1848, 0.0466, ..., 0.1023, -0.1683, 0.0696],
[-0.1171, -0.2997, 0.2820, ..., 0...orm_B'], [tensor([-3.5949, -0.3235, -0.7302, ..., 0.3123, -0.6005, 0.7288],
device='cuda:0'), 'norm_W'], ...]
atol = 0.05, verbose = False

def check_equal(first, second, atol=1e-2, verbose=False):
    diction_x = {}
    diction_y = {}

    if verbose:
        for i, (x, y) in enumerate(zip(first, second)):
            print(x[1], y[1])

    for i, (x, y) in enumerate(zip(first, second)):
        k = 0
        while (diction_x.get((k, x[1])) is not None):
            k = k + 1
        diction_x[k, x[1]] = x[0]
        k = 0
        while (diction_y.get((k, y[1])) is not None):
            k = k + 1
        diction_y[k, y[1]] = y[0]
    if verbose:
        print()
        for i, (x, y) in enumerate(zip(diction_x, diction_y)):
            print(x, y)

    for i, (x, y) in enumerate(zip(diction_x, diction_y)):
        if (x[0] == 1): continue
        if verbose:
            print("checking ", x[1], ":")
        y = diction_y[x[0], x[1]]
        x = diction_x[x[0], x[1]]
        x = x.cpu().detach().numpy()
        y = y.cpu().detach().numpy()
        if verbose:
            print(x)
            print(y)

        avgx = np.sum(abs(x), dtype=float)
        countx = x.shape[0]
        for i in range(len(x.shape) - 1):
            countx *= x.shape[i + 1]
            avgx = np.sum(avgx)
        tollerance = 1
        if avgx != float('inf') and avgx != -float('inf'):
            avgx = avgx / countx
            tollerance = avgx * atol
        if verbose:
            print("tollerance is ", tollerance)
            print("x = {}".format(x.flatten()))
            print("y = {}".format(y.flatten()))
            print('-' * 80)
      np.testing.assert_allclose(x, y, err_msg="Index: {}".format(i), atol=tollerance)

E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0.0945344
E Index: 0
E Mismatched elements: 24 / 1048576 (0.00229%)
E Max absolute difference: 0.14618826
E Max relative difference: 311.9142
E x: array([[ 3.202798, -0.936109, 2.034748, ..., -0.728867, -0.686356,
E -0.4015 ],
E [-0.128636, 0.634176, -0.778709, ..., -1.555946, 0.301913,...
E y: array([[ 3.202195, -0.939658, 2.035218, ..., -0.730295, -0.683705,
E -0.402174],
E [-0.129678, 0.634598, -0.778258, ..., -1.556513, 0.301758,...

tests/unit/test_cuda_backward.py:73: AssertionError
----------------------------- Captured stdout call -----------------------------
DeepSpeed Transformer config is {'layer_id': 9, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #9 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 10, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #10 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 11, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #11 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 12, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #12 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 13, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #13 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 14, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #14 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 15, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #15 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 16, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #16 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 17, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #17 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 18, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #18 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 19, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #19 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 20, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #20 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 21, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #21 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 22, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #22 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 23, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #23 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 24, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #24 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 25, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #25 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 26, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #26 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 27, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #27 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 28, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #28 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 29, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #29 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 30, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #30 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 31, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #31 is created with date type [float].
DeepSpeed Transformer config is {'layer_id': 32, 'batch_size': 3, 'hidden_size': 1024, 'intermediate_size': 1024, 'heads': 16, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': False, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #32 is created with date type [float].
=============================== warnings summary ===============================
tests/unit/test_cuda_backward.py::test_backward[8-1600-128-25-3-True-True-0.05]
/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED tests/unit/test_cuda_backward.py::test_backward[8-1600-128-2-3-True-True-0.05]
FAILED tests/unit/test_cuda_backward.py::test_backward[3-1024-119-16-24-True-False-0.05]
============== 2 failed, 3 passed, 1 warning in 63.89s (0:01:03) ===============
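
One note on reading these failures (my own reading of check_equal above, so take it with a grain of salt): the atol printed in the assertion message is the test's atol parameter scaled by the mean magnitude of the reference tensor, roughly:

import numpy as np

def effective_tolerance(x, atol=0.05):
    # Mirrors check_equal above: scale atol by the mean |x| of the reference gradients.
    avg = np.mean(np.abs(x), dtype=float)
    return avg * atol if np.isfinite(avg) else 1.0

# e.g. for the first failure 0.432773 / 0.05 is about 8.66 (mean magnitude), and only
# 7 of 2,560,000 elements exceed that tolerance, with a max absolute difference of 0.5117.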

@RezaYazdaniAminabadi
Contributor

Hi @garvct

Sorry, the reason the test could not run is that it was not among the unit tests. I have added it in this branch. Let's use this branch to solve some of these issues.
By the way, all unit tests pass in my case:

================================================================= warnings summary ==================================================================
tests/unit/test_cuda_backward.py::test_backward[4-2048-512-32-24-True-True-0.05]
  /opt/conda/lib/python3.6/site-packages/torch/utils/cpp_extension.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
    import imp

-- Docs: https://docs.pytest.org/en/stable/warnings.html
===================================================== 6 passed, 1 warning in 131.90s (0:02:11) ======================================================

I am wondering if this issue is related to the Torch version. I am using the following environment:

torch cuda version ............... 11.1
nvcc version ..................... 11.1
deepspeed install path ........... ['/home/reyazda/ds-inference/deepspeed']
deepspeed info ................... 0.4.1+cfe14f7, cfe14f7, reyazda/test-sparse
deepspeed wheel compiled w. ...... torch 1.8, cuda 11.1
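
If the Torch version turns out to matter, one way to check could be to rebuild DeepSpeed against roughly the same stack I have (just a sketch; the exact version pins may need adjusting on your side):

pip3 install torch==1.8.1+cu111 torchvision==0.9.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
DS_BUILD_OPS=1 pip3 install deepspeed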

Thanks,
Reza

@RezaYazdaniAminabadi
Contributor

I see you are using torch 1.9 + CUDA 11.3. Is this a nightly version of Torch?
I will try with torch1.9 and see if I can repro this.

@garvct
Author

garvct commented Jun 29, 2021

pytest tests/unit/test_cuda_backward.py::test_backward[4-2048-512-32-24-True-True-0.05]
============================= test session starts ==============================
platform linux -- Python 3.8.8, pytest-6.2.3, py-1.10.0, pluggy-0.13.1
rootdir: /workspace/nfs2/yandex/DeepSpeed
plugins: pythonpath-0.7.3, cov-2.11.1, hypothesis-4.50.8
collected 1 item

tests/unit/test_cuda_backward.py F [100%]

=================================== FAILURES ===================================
________________ test_backward[4-2048-512-32-24-True-True-0.05] ________________

batch_size = 4, hidden_size = 2048, seq_len = 512, heads = 32, num_layers = 24
is_preln = True, use_fp16 = True, atol = 0.05

@pytest.mark.parametrize('batch_size, hidden_size, seq_len, heads, num_layers, is_preln, use_fp16, atol',
                         [
                             (4,2048,512,32,24,True,True, 0.05),
                             (8,1600,128,25,3,True,True, 0.05),
                             (8,160,128,2,3,True,True, 0.1),
                             (8,1600,128,2,3,True,True, 0.05),
                             (3,1024,119,16,24,True,False, 0.05),
                             (3,1024,115,16,24,True,True, 0.05),
                             #(1024,128,10,2,2,False,False, 0.1),
                             #(3,1024,52,16,24,False,True, 0.2),
                             #(3,128,51,2,24,False,False, 0.1),
                             #(3,128,54,2,24,False,True, 0.2),
                         ]) # yapf: disable
def test_backward(batch_size,
                  hidden_size,
                  seq_len,
                  heads,
                  num_layers,
                  is_preln,
                  use_fp16,
                  atol):
    # Only run fp16 test cases on devices with 7+ capability.
    major, _ = torch.cuda.get_device_capability()
    if major < 7 and (use_fp16 is True or is_preln is False):
        return

    ds_config = DeepSpeedTransformerConfig()
    ds_config.layer_id = None
    ds_config.batch_size = batch_size
    ds_config.hidden_size = hidden_size
    ds_config.intermediate_size = hidden_size
    ds_config.heads = heads
    ds_config.attn_dropout_ratio = 0.0
    ds_config.hidden_dropout_ratio = 0.0
    ds_config.num_hidden_layers = num_layers
    ds_config.pre_layer_norm = is_preln
    ds_config.initializer_range = 0.02
    ds_config.fp16 = use_fp16
  run_backward(ds_config, seq_len, atol=atol, verbose=False)

tests/unit/test_cuda_backward.py:298:


tests/unit/test_cuda_backward.py:253: in run_backward
check_equal(base_grads, ds_grads, atol=atol, verbose=verbose)


first = [[tensor([[[ 0.2712, 0.0586, -0.1754, ..., -0.0591, -0.2847, -0.0381],
[ 0.0069, 0.1061, 0.0195, ..., -0...0.7329, 0.3718, 1.3848, ..., -2.8047, 0.3921, 1.5293],
device='cuda:0', dtype=torch.float16), 'N2_W'], ...]
second = [[tensor([[[ 0.2715, 0.0586, -0.1753, ..., -0.0591, -0.2849, -0.0380],
[ 0.0069, 0.1061, 0.0195, ..., -0...5181, -1.9365, -0.2527, ..., 0.1510, 6.9883, 0.7705],
device='cuda:0', dtype=torch.float16), 'norm_W'], ...]
atol = 0.05, verbose = False

def check_equal(first, second, atol=1e-2, verbose=False):
    diction_x = {}
    diction_y = {}

    if verbose:
        for i, (x, y) in enumerate(zip(first, second)):
            print(x[1], y[1])

    for i, (x, y) in enumerate(zip(first, second)):
        k = 0
        while (diction_x.get((k, x[1])) is not None):
            k = k + 1
        diction_x[k, x[1]] = x[0]
        k = 0
        while (diction_y.get((k, y[1])) is not None):
            k = k + 1
        diction_y[k, y[1]] = y[0]
    if verbose:
        print()
        for i, (x, y) in enumerate(zip(diction_x, diction_y)):
            print(x, y)

    for i, (x, y) in enumerate(zip(diction_x, diction_y)):
        if (x[0] == 1): continue
        if verbose:
            print("checking ", x[1], ":")
        y = diction_y[x[0], x[1]]
        x = diction_x[x[0], x[1]]
        x = x.cpu().detach().numpy()
        y = y.cpu().detach().numpy()
        if verbose:
            print(x)
            print(y)

        avgx = np.sum(abs(x), dtype=float)
        countx = x.shape[0]
        for i in range(len(x.shape) - 1):
            countx *= x.shape[i + 1]
            avgx = np.sum(avgx)
        tollerance = 1
        if avgx != float('inf') and avgx != -float('inf'):
            avgx = avgx / countx
            tollerance = avgx * atol
        if verbose:
            print("tollerance is ", tollerance)
            print("x = {}".format(x.flatten()))
            print("y = {}".format(y.flatten()))
            print('-' * 80)
      np.testing.assert_allclose(x, y, err_msg="Index: {}".format(i), atol=tollerance)

E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=0.198958
E Index: 0
E Mismatched elements: 1 / 4194304 (2.38e-05%)
E Max absolute difference: 0.2031
E Max relative difference: 19660.
E x: array([[-13.18 , -9.836 , 1.978 , ..., -0.272 , -0.2144 ,
E -7.516 ],
E [ 5.914 , 4.492 , 0.3672 , ..., -1.485 , -0.8804 ,...
E y: array([[-13.18 , -9.83 , 1.981 , ..., -0.2703, -0.2114, -7.508 ],
E [ 5.92 , 4.496 , 0.368 , ..., -1.487 , -0.8804, 2.041 ],
E [ 6.96 , 5.32 , 0.0904, ..., -2.182 , -2.43 , 1.734 ],...

tests/unit/test_cuda_backward.py:73: AssertionError
----------------------------- Captured stdout call -----------------------------
DeepSpeed Transformer config is {'layer_id': 0, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
Using /root/.cache/torch_extensions as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/transformer/build.ninja...
Building extension module transformer...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module transformer...
Time to load transformer op: 0.11982226371765137 seconds
layer #0 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 1, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #1 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 2, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #2 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 3, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #3 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 4, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #4 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 5, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #5 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 6, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #6 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 7, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #7 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 8, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #8 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 9, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #9 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 10, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #10 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 11, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #11 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 12, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #12 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 13, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #13 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 14, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #14 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 15, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #15 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 16, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #16 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 17, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #17 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 18, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #18 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 19, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #19 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 20, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #20 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 21, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #21 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 22, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #22 is created with date type [half].
DeepSpeed Transformer config is {'layer_id': 23, 'batch_size': 4, 'hidden_size': 2048, 'intermediate_size': 2048, 'heads': 32, 'attn_dropout_ratio': 0.0, 'hidden_dropout_ratio': 0.0, 'num_hidden_layers': 24, 'initializer_range': 0.02, 'fp16': True, 'pre_layer_norm': True, 'local_rank': -1, 'seed': -1, 'normalize_invertible': False, 'gelu_checkpoint': False, 'adjust_init_range': True, 'test_gemm': False, 'layer_norm_eps': 1e-12, 'training': True, 'is_grad_enabled': True, 'attn_dropout_checkpoint': False, 'stochastic_mode': False, 'huggingface': False}
layer #23 is created with date type [half].
=============================== warnings summary ===============================
tests/unit/test_cuda_backward.py::test_backward[4-2048-512-32-24-True-True-0.05]
/opt/conda/lib/python3.8/site-packages/torch/utils/cpp_extension.py:3: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED tests/unit/test_cuda_backward.py::test_backward[4-2048-512-32-24-True-True-0.05]
======================== 1 failed, 1 warning in 53.41s =========================

I am using a modified nvidia pytorch 21.05-py3 container.
