When training on my own dataset: "The size of tensor a (76800) must match the size of tensor b (32768) at non-singleton dimension 1" #48

Open
Ohwang123 opened this issue Jul 18, 2023 · 4 comments

@Ohwang123

When training on my own dataset with unetr_pp_trainer_synapse, the following error occurs:
Traceback (most recent call last):
File "C:\Project\unetr_pp\unetr_pp\run\run_training.py", line 169, in
main()
File "C:\Project\unetr_pp\unetr_pp\run\run_training.py", line 153, in main
trainer.run_training()
File "C:\Project\unetr_pp\unetr_pp\training\network_training\unetr_pp_trainer_synapse.py", line 472, in run_training
ret = super().run_training()
File "C:\Project\unetr_pp\unetr_pp\training\network_training\Trainer_synapse.py", line 320, in run_training
super(Trainer_synapse, self).run_training()
File "C:\Project\unetr_pp\unetr_pp\training\network_training\network_trainer_synapse.py", line 482, in run_training
l = self.run_iteration(self.tr_gen, True)
File "C:\Project\unetr_pp\unetr_pp\training\network_training\unetr_pp_trainer_synapse.py", line 285, in run_iteration
output = self.network(data)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Project\unetr_pp\unetr_pp\network_architecture\synapse\unetr_pp_synapse.py", line 134, in forward
x_output, hidden_states = self.unetr_pp_encoder(x_in)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Project\unetr_pp\unetr_pp\network_architecture\synapse\model_components.py", line 69, in forward
x, hidden_states = self.forward_features(x)
File "C:\Project\unetr_pp\unetr_pp\network_architecture\synapse\model_components.py", line 56, in forward_features
x = self.stages[0](x)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
input = module(input)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Project\unetr_pp\unetr_pp\network_architecture\synapse\transformerblock.py", line 58, in forward
x = x + self.pos_embed
RuntimeError: The size of tensor a (76800) must match the size of tensor b (32768) at non-singleton dimension 1

I am running the following nnFormer: 3d_fullres
My trainer class is: <class 'unetr_pp.training.network_training.unetr_pp_trainer_synapse.unetr_pp_trainer_synapse'>
For that I will be using the following configuration:
num_classes: 1
modalities: {0: 'CT'}
use_mask_for_norm OrderedDict([(0, False)])
keep_only_largest_region None
min_region_size_per_class None
min_size_per_class None
normalization_schemes OrderedDict([(0, 'CT')])
stages...

stage: 0
{'batch_size': 2, 'num_pool_per_axis': [4, 5, 5], 'patch_size': array([ 96, 160, 160]), 'median_patient_size_in_voxels': array([148, 258, 258]), 'current_spacing': array([1.98689442, 1.41643912, 1.41643912]), 'original_spacing': array([1. , 0.71289098, 0.71289098]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}

stage: 1
{'batch_size': 2, 'num_pool_per_axis': [4, 5, 5], 'patch_size': array([ 96, 160, 160]), 'median_patient_size_in_voxels': array([294, 512, 512]), 'current_spacing': array([1. , 0.71289098, 0.71289098]), 'original_spacing': array([1. , 0.71289098, 0.71289098]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}

Please, how can I solve this problem?

@Amshaker
Owner

Amshaker commented Aug 2, 2023

Could you please print the shape of the data?
The patches of Synapse should be in the shape of 128 × 128 × 64. Please review our paper to see the correct input size for each dataset.
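For reference, a minimal sketch of how the two numbers in the error relate to the patch size, assuming the first encoder stage reduces the patch by a factor of 2 in depth and 4 in height and width (an assumption for illustration, not something stated in this thread):

def tokens_after_stage0(d, h, w):
    # Number of tokens the first positional embedding expects,
    # assuming a (2, 4, 4) downsampling in stage 0.
    return (d // 2) * (h // 4) * (w // 4)

print(tokens_after_stage0(64, 128, 128))  # 32768 -> size of pos_embed (tensor b)
print(tokens_after_stage0(96, 160, 160))  # 76800 -> tokens from the larger patch (tensor a)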

@Ohwang123
Author

Ohwang123 commented Aug 17, 2023

Hey @Amshaker,
My own data is:
num_classes: 1
modalities: {0: 'CT'}
stage: 0
{'batch_size': 2, 'num_pool_per_axis': [4, 5, 5], 'patch_size': array([ 96, 160, 160]), 'median_patient_size_in_voxels': array([167, 240, 240]), 'current_spacing': array([2.13021975, 1.49781076, 1.49781076]), 'original_spacing': array([1. , 0.703125, 0.703125]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}

stage: 1
{'batch_size': 2, 'num_pool_per_axis': [4, 5, 5], 'patch_size': array([ 96, 160, 160]), 'median_patient_size_in_voxels': array([356, 512, 512]), 'current_spacing': array([1. , 0.703125, 0.703125]), 'original_spacing': array([1. , 0.703125, 0.703125]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}

My environment is Windows, python==3.8, and I run: 3d_fullres unetr_pp_trainer_synapse 1 4

This is the shape of the data:

def forward(self, x_in):
        print(f"x_in.shape------>{x_in.shape}")
        x_output, hidden_states = self.unetr_pp_encoder(x_in)
        print(f"x_out.shape----->{x_output.shape}")
        convBlock = self.encoder1(x_in)

        # Four encoders
        enc1 = hidden_states[0]
        enc2 = hidden_states[1]
        enc3 = hidden_states[2]
        enc4 = hidden_states[3]

x_in.shape------>torch.Size([1, 1, 64, 128, 128])
UnetrPPEncodertorch.Size([1, 1, 64, 128, 128])
x_out.shape----->torch.Size([1, 64, 256])
epoch: 0
x_in.shape------>torch.Size([2, 1, 96, 160, 160])
UnetrPPEncodertorch.Size([2, 1, 96, 160, 160])

Then the error occurs:

Traceback (most recent call last):
File "C:\Project\unetr_plus_plus-main\unetr_pp\run\run_training.py", line 169, in
main()
File "C:\Project\unetr_plus_plus-main\unetr_pp\run\run_training.py", line 153, in main
trainer.run_training()
File "C:\Project\unetr_plus_plus-main\unetr_pp\training\network_training\unetr_pp_trainer_synapse.py", line 474, in run_training
ret = super().run_training()
File "C:\Project\unetr_plus_plus-main\unetr_pp\training\network_training\Trainer_synapse.py", line 320, in run_training
super(Trainer_synapse, self).run_training()
File "C:\Project\unetr_plus_plus-main\unetr_pp\training\network_training\network_trainer_synapse.py", line 481, in run_training
l = self.run_iteration(self.tr_gen, True)
File "C:\Project\unetr_plus_plus-main\unetr_pp\training\network_training\unetr_pp_trainer_synapse.py", line 284, in run_iteration
output = self.network(data)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Project\unetr_plus_plus-main\unetr_pp\network_architecture\synapse\unetr_pp_synapse.py", line 135, in forward
x_output, hidden_states = self.unetr_pp_encoder(x_in)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Project\unetr_plus_plus-main\unetr_pp\network_architecture\synapse\model_components.py", line 70, in forward
x, hidden_states = self.forward_features(x)
File "C:\Project\unetr_plus_plus-main\unetr_pp\network_architecture\synapse\model_components.py", line 56, in forward_features
x = self.stages[0](x)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
input = module(input)
File "C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Project\unetr_plus_plus-main\unetr_pp\network_architecture\synapse\transformerblock.py", line 57, in forward
x = x + self.pos_embed
RuntimeError: The size of tensor a (76800) must match the size of tensor b (32768) at non-singleton dimension 1

Process finished with exit code 1

These warnings also appeared before the error:

C:\Monai\envs\unetr_pp\lib\site-packages\torch\nn\functional.py:2498: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
verify_batch_size([input.size(0) * input.size(1) // num_groups, num_groups] + list(input.size()[2:]))
C:\Project\unetr_plus_plus-main\unetr_pp\network_architecture\synapse\transformerblock.py:95: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
qkvv = self.qkvv(x).reshape(B, N, 4, self.num_heads, C // self.num_heads)
C:\Monai\envs\unetr_pp\lib\site-packages\einops\einops.py:204: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
inferred_length: int = length // known_product
Unsupported operator aten::mul encountered 108 time(s)
Unsupported operator aten::add encountered 67 time(s)
Unsupported operator aten::div encountered 68 time(s)
Unsupported operator aten::norm encountered 42 time(s)
Unsupported operator aten::clamp_min encountered 42 time(s)
Unsupported operator aten::expand_as encountered 42 time(s)
Unsupported operator aten::softmax encountered 42 time(s)
Unsupported operator aten::add encountered 65 time(s)
Unsupported operator aten::leaky_relu_ encountered 46 time(s)
Unsupported operator aten::feature_dropout encountered 21 time(s)
Unsupported operator aten::mul_ encountered 2 time(s)

@LimxRabbit

I've met the same problem. Did you solve it?

@qyuiiii123

You have the wrong patch_size. It should be (64, 128, 128) for Synapse, but your data shape is (96, 160, 160). You need to open the plans file and change it. The numbers in the error come from 76800 = (96/2) × (160/4) × (160/4) and 32768 = (64/2) × (128/4) × (128/4), which corresponds to the input_size defined in the file /.../unetr_plus_plus-main/unetr_pp/network_architecture/synapse/unetr_pp_synapse.py:
class UnetrPPEncoder(nn.Module):
    def __init__(self, input_size=[32 * 32 * 32, 16 * 16 * 16, 8 * 8 * 8, 4 * 4 * 4],
                 dims=[32, 64, 128, 256], proj_size=[64, 64, 64, 32], depths=[3, 3, 3, 3],
                 num_heads=4, spatial_dims=3, in_channels=1, dropout=0.0,
                 transformer_dropout_rate=0.15, **kwargs):
        super().__init__()
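Two hedged sketches of the fix described above; both rely on assumptions (the plans file path and the exact downsampling schedule are not shown in this thread), so treat them as starting points rather than drop-in patches.

Option 1: set the patch_size stored in the preprocessed plans pickle back to (64, 128, 128), as suggested. The path below is hypothetical; the 'plans_per_stage' / 'patch_size' keys follow the nnUNet-style plans this project uses.

import pickle
import numpy as np

# Hypothetical path: look for the *_plans_3D.pkl file under your preprocessed dataset folder.
plans_path = "path/to/preprocessed/TaskXXX/xxx_plans_3D.pkl"

with open(plans_path, "rb") as f:
    plans = pickle.load(f)

# Overwrite the patch size for every stage with the Synapse default.
for stage in plans["plans_per_stage"]:
    plans["plans_per_stage"][stage]["patch_size"] = np.array([64, 128, 128])

with open(plans_path, "wb") as f:
    pickle.dump(plans, f)

Option 2: keep the 96 × 160 × 160 patches and recompute input_size instead, assuming the same stage-wise downsampling as the default Synapse configuration (a (2, 4, 4) reduction in stage 0, then a further factor of 2 per axis in each later stage; the variable names are only illustrative).

patch = (96, 160, 160)

stage0 = (patch[0] // 2)  * (patch[1] // 4)  * (patch[2] // 4)    # 48*40*40 = 76800
stage1 = (patch[0] // 4)  * (patch[1] // 8)  * (patch[2] // 8)    # 24*20*20 = 9600
stage2 = (patch[0] // 8)  * (patch[1] // 16) * (patch[2] // 16)   # 12*10*10 = 1200
stage3 = (patch[0] // 16) * (patch[1] // 32) * (patch[2] // 32)   # 6*5*5   = 150

input_size = [stage0, stage1, stage2, stage3]
print(input_size)  # [76800, 9600, 1200, 150]

Other parts of the network may also assume the default 64 × 128 × 128 patch, so Option 1 is the simpler and safer route.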
