RuntimeError: Given groups=1, weight of size [48, 37, 11], expected input[8, 691, 18] to have 37 channels, but got 691 channels instead #61
Hi, as you can see, your input vector has the wrong shape. I suggest double checking your data processing function, and the dimension you concatenate on.
Thank you for your clarification, but I am still not quite sure how to address it. This is how the function converts the dataset from csv to npz:
my input and output after reading the csv files look like the following :
Could you please point it out for me, or edit the code directly? Thank you again for sharing the code and supporting the troubleshooting.
This seems good to me, so the problem probably isn't coming from there.
As I stated before, my npz dataset dimensions look like this:
The benchmark notebook I am trying to use looks like this (it is taken from your repo on GitHub), and the dataloader specifically looks like this:
The error that I am getting is the following:
This error is giving me a hard time, since I have tried several transformations before. But since you confirmed the same input and output, how can we make this work? By the way, I tried the original benchmark using the csv directly and it worked; the code looks like this:
Thank you for debugging this with me. My goal is to re-run your experiment so I can build my own transformer in the end, so understanding your experiment will help me a lot. Thank you
Indeed, there is an issue with the dimensions. @maxjcohen, in the ozedataset class, R and Z are concatenated on the wrong dimension. I suggest changing
to
in
Hi, there may be an inconsistency between this repo and the competition's dataset, as the latter hasn't been maintained since 2021 while this project kept being updated. The preprocessing function remains correct, however you may need to adapt it to your dataset. In all cases, we want to concatenate on the feature dimension.
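A minimal sketch of that concatenation, using synthetic arrays with the shapes reported in this thread (this is an illustration, not the repo's actual preprocessing code):

```python
import numpy as np

# Synthetic arrays with the shapes from this thread (N kept small here).
N, T = 4, 672
R = np.zeros((N, 19), dtype=np.float32)    # static features
X = np.zeros((N, 8, T), dtype=np.float32)  # targets
Z = np.zeros((N, 18, T), dtype=np.float32) # time-varying features

# Move the time axis to position 1 so features sit on the last axis: (N, T, 18).
Zt = Z.transpose(0, 2, 1)

# Repeat the static features R along the time axis: (N, T, 19).
Rt = np.tile(R[:, np.newaxis, :], (1, T, 1))

# Concatenate on the FEATURE dimension (last axis), not the time axis:
# (N, T, 18) ++ (N, T, 19) -> (N, T, 37), matching d_input = 37.
x = np.concatenate([Zt, Rt], axis=-1)

# Targets: (N, 8, T) -> (N, T, 8), matching d_output = 8.
y = X.transpose(0, 2, 1)

print(x.shape, y.shape)  # (4, 672, 37) (4, 672, 8)
```

With the batch dimension this yields the `(batch, time, features)` layout the benchmark's `forward` expects before its `transpose(1, 2)`.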
I have the following setup, and I actually changed d_input from 38 to 37:
```python
# Training parameters
DATASET_PATH = 'output.npz'
BATCH_SIZE = 8
NUM_WORKERS = 4
LR = 1e-4
EPOCHS = 30

# Model parameters
d_model = 48  # Latent dim
N = 2  # Number of layers
dropout = 0.2  # Dropout rate
d_input = 37  # From dataset
d_output = 8  # From dataset

# Config
sns.set()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(f"Using device {device}")
```
My dataset has the following shapes after converting the Oze dataset from csv to npz:
[('R', (7500, 19), dtype('float32')), ('X', (7500, 8, 672), dtype('float32')), ('Z', (7500, 18, 672), dtype('float32'))]
but when I run the benchmark from the transformer repo, I get the following error during training:
```
[Epoch 1/30]: 0%| | 0/5500 [00:00<?, ?it/s]
torch.Size([8, 18, 691])
torch.Size([8, 8, 672])
RuntimeError Traceback (most recent call last)
Cell In[6], line 16
14 print(x.shape)
15 print(y.shape)
---> 16 netout = net(x.to(device))
18 # Comupte loss
19 loss = loss_function(y.to(device), netout)
File ~/miniconda3/envs/Test/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File ~/Implementations/Transformers/OzeChallenge/Original/transformer/src/benchmark.py:121, in ConvGru.forward(self, x)
119 def forward(self, x):
120 x = x.transpose(1, 2)
--> 121 x = self.conv1(x)
122 x = self.activation(x)
123 x = self.conv2(x)
File ~/miniconda3/envs/Test/lib/python3.10/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []
File ~/miniconda3/envs/Test/lib/python3.10/site-packages/torch/nn/modules/conv.py:313, in Conv1d.forward(self, input)
312 def forward(self, input: Tensor) -> Tensor:
--> 313 return self._conv_forward(input, self.weight, self.bias)
File ~/miniconda3/envs/Test/lib/python3.10/site-packages/torch/nn/modules/conv.py:309, in Conv1d._conv_forward(self, input, weight, bias)
305 if self.padding_mode != 'zeros':
306 return F.conv1d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
307 weight, bias, self.stride,
308 _single(0), self.dilation, self.groups)
--> 309 return F.conv1d(input, weight, bias, self.stride,
310 self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [48, 37, 11], expected input[8, 691, 18] to have 37 channels, but got 691 channels instead
```
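For what it's worth, the 691 in this error can be reproduced with a small numpy sketch (shapes taken from this thread; how R is tiled here is an assumption about the buggy code path). Concatenating on the last axis while Z is still `(N, 18, 672)` joins along time, and 672 + 19 = 691:

```python
import numpy as np

N, T = 8, 672
Z = np.zeros((N, 18, T), dtype=np.float32)  # (batch, features, time)
R = np.zeros((N, 19), dtype=np.float32)     # (batch, static features)

# Buggy concat: with Z still (N, 18, T), axis=-1 is the TIME axis,
# so 672 + 19 = 691 "channels" appear later.
R_bad = np.tile(R[:, np.newaxis, :], (1, 18, 1))  # (N, 18, 19)
x_bad = np.concatenate([Z, R_bad], axis=-1)       # (N, 18, 691)

# The benchmark's forward() then does x.transpose(1, 2) -> (N, 691, 18);
# Conv1d reads dim 1 as channels, hence "got 691 channels instead" of 37.
print(x_bad.transpose(0, 2, 1).shape)  # (8, 691, 18)
```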
I know I have to use rollaxis to get my input into the following shape:
`x.shape = torch.Size([7500, 672, 37])`, `y.shape = torch.Size([7500, 672, 8])`
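The rollaxis step mentioned above could look like this (a minimal sketch with a smaller N for illustration; `np.moveaxis` is the modern equivalent):

```python
import numpy as np

N, T = 16, 672  # the thread's dataset has N = 7500
Z = np.zeros((N, 18, T), dtype=np.float32)
X = np.zeros((N, 8, T), dtype=np.float32)

# Roll the time axis (axis 2) into position 1, leaving features last.
Zr = np.rollaxis(Z, 2, 1)  # (N, 672, 18)
Xr = np.rollaxis(X, 2, 1)  # (N, 672, 8)

# Appending the 19 static features from R along the last axis would then
# bring the inputs to the target (N, 672, 37).
print(Zr.shape, Xr.shape)  # (16, 672, 18) (16, 672, 8)
```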
Could you please help me with it? I am a bit confused.
Thank you in advance