TypeError: 'NoneType' object is not callable #3

Closed
binghuang21 opened this issue Dec 7, 2021 · 2 comments

Comments

@binghuang21

Hello, following py3_requirements.txt I set up a PyTorch 1.9.1 environment, but when I try to run train.py it raises a TypeError. The details are below. Any suggestions would be greatly appreciated.
```
name:
args Namespace(batch_size=2, epochs=200, gpus=None, lr=0.0001, name='', path=None, train_data='../dataset/lpd_5_prcem_mix_v8_10000.npz')
num of encoder classes: [ 18 3 18 129 18 6 20 102 4865] [1 1 1]
D_MODEL 512 N_LAYER 12 N_HEAD 8 DECODER ATTN causal-linear

: [ 18 3 18 129 18 6 20 102 4865]
DEVICE COUNT: 2
VISIBLE: 0,1
n_parameters: 39,005,620
train_data: dataset
batch_size: 2
num_batch: 1519
train_x: (3039, 9999, 9)
train_y: (3039, 9999, 9)
train_mask: (3039, 9999)
lr_init: 0.0001
DECAY_EPOCH: []
DECAY_RATIO: 0.1
```
```
Traceback (most recent call last):
  File "train.py", line 219, in <module>
    train_dp()
  File "train.py", line 162, in train_dp
    losses = net(is_train=True, x=batch_x, target=batch_y, loss_mask=batch_mask, init_token=batch_init)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/bing/CODE/video-bgm-generation/src/model.py", line 482, in forward
    return self.train_forward(**kwargs)
  File "/home/bing/CODE/video-bgm-generation/src/model.py", line 450, in train_forward
    h, y_type = self.forward_hidden(x, memory=None, is_training=True, init_token=init_token)
  File "/home/bing/CODE/video-bgm-generation/src/model.py", line 221, in forward_hidden
    encoder_hidden = self.transformer_encoder(encoder_pos_emb, attn_mask)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/fast_transformers/transformers.py", line 138, in forward
    x = layer(x, attn_mask=attn_mask, length_mask=length_mask)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/fast_transformers/transformers.py", line 81, in forward
    key_lengths=length_mask
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/fast_transformers/attention/attention_layer.py", line 109, in forward
    key_lengths
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/fast_transformers/attention/causal_linear_attention.py", line 101, in forward
    values
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/fast_transformers/attention/causal_linear_attention.py", line 23, in causal_linear
    V_new = causal_dot_product(Q, K, V)
  File "/home/bing/anaconda3/envs/torch-1.9/lib/python3.7/site-packages/fast_transformers/causal_product/__init__.py", line 48, in forward
    product
TypeError: 'NoneType' object is not callable
```

Looking forward to your reply!

wzk1015 (Owner) commented Dec 7, 2021

It may be that your CUDA version and your fast_transformers version are incompatible. You can refer to this issue in the fast_transformers repo.
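For context on why an incompatibility surfaces as this particular error: fast_transformers imports its compiled CUDA kernel inside a try/except, and if the binary does not load against the installed torch/CUDA, the kernel name is left as `None`; the crash only appears later when the attention layer calls it. A minimal sketch of that failure mode (`nonexistent_cuda_ext` is a made-up module name standing in for the compiled extension):

```python
# Sketch: a silently failed extension import leaves the kernel as None,
# and calling it later raises "'NoneType' object is not callable".
try:
    from nonexistent_cuda_ext import causal_dot_product  # placeholder import that fails
except ImportError:
    causal_dot_product = None  # analogous fallback to what fast_transformers does

def causal_linear(Q, K, V):
    # Mirrors the call site in causal_linear_attention.py from the traceback
    return causal_dot_product(Q, K, V)

try:
    causal_linear("Q", "K", "V")
except TypeError as err:
    print(err)  # 'NoneType' object is not callable
```

So the fix is not in train.py; it is getting a fast_transformers build whose compiled kernel actually loads in your environment.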

wzk1015 (Owner) commented Dec 7, 2021

Downgrading torch (e.g. to 1.7.0) or fast_transformers may also work.
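One possible downgrade recipe, sketched below; the versions are illustrative (1.7.0 comes from the comment above, the fast-transformers pin is an assumption, not a tested combination). Reinstalling fast_transformers from source forces its CUDA kernels to compile against whatever torch is actually installed:

```shell
# Remove the existing (incompatible) build of fast_transformers
pip uninstall -y pytorch-fast-transformers

# Downgrade torch (version is an example, match it to your CUDA toolkit)
pip install torch==1.7.0

# Reinstall from source so the CUDA extension is compiled against this torch
pip install pytorch-fast-transformers --no-binary pytorch-fast-transformers
```

After reinstalling, a quick `python -c "from fast_transformers.causal_product import causal_dot_product; print(causal_dot_product)"` should print a function rather than `None`.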
