dimension out of range (expected to be in range of [-1, 0], but got 1) #5554
Comments
@zou3519 is right, but let’s keep this issue open until we fix the error message |
Thanks @zou3519, |
@pskanade you have two labels, yes? Then your output should be of size |
Fixes #5554. Adds an error message for when NLLLoss is passed an input and target whose batch sizes don't match. Ideally this check should live in ATen, but since some NLLLoss logic lives in Python, the check sits there for now.
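For context, a minimal sketch of the mismatch that the new check guards against (shapes are illustrative; the exact error text varies by version):

```python
import torch
import torch.nn.functional as F

log_probs = F.log_softmax(torch.randn(4, 3), dim=1)  # batch of 4
target = torch.tensor([0, 1, 2, 0, 1])               # batch of 5: mismatch

try:
    F.nll_loss(log_probs, target)
except (ValueError, RuntimeError) as e:
    print(e)  # complains that the input and target batch sizes differ
```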
Just curious: is this change in the latest PyTorch release? I seem to be getting the same error message. I am on PyTorch 0.4.0. |
@mithunpaul08 this should be in 0.4. Could you post a code sample that gives the error message? |
@zou3519, the code is pasted below, along with the sizes of the tensors. The error occurs on the loss line.
Error:
Sizes of tensors:
|
The input to NLLLoss should be of size (N, C), while the target is of size (N,). That being said, I can't reproduce your error message on 0.4. This is what happens when I use tensors of the same size as yours:
|
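For reference, a minimal sketch of the shapes NLLLoss expects (N and C chosen arbitrarily here):

```python
import torch
import torch.nn as nn

# NLLLoss expects log-probabilities of shape (N, C) and a target of
# shape (N,) holding integer class indices in [0, C).
log_probs = nn.LogSoftmax(dim=1)(torch.randn(3, 2))  # N=3 samples, C=2 classes
target = torch.tensor([0, 1, 1])                     # one class index per sample

loss = nn.NLLLoss()(log_probs, target)
print(loss)  # a scalar (mean reduction by default)
```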
OK, I figured it out. The key point I didn't understand from the documentation was that the target should be a single entry saying which class the example belongs to, e.g. [2], rather than a one-hot vector like [0, 0, 1]. Frankly, I imagined the input and target having similar shapes, which felt more intuitive: since the input is a vector like [0, 0, 1], I assumed the target should have the same shape. Anyway, I'm glad it worked out. I would love better-worded documentation, though, imho. Below is the code/shapes without the dimension error.
|
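In other words (a small sketch of the point above, with made-up numbers): the target is the index of the hot entry, not the one-hot vector itself:

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[0.2, 0.1, 3.0]])   # input: (N=1, C=3)
one_hot = torch.tensor([[0, 0, 1]])        # what one might expect to pass
target = one_hot.argmax(dim=1)             # what NLLLoss wants: tensor([2])

loss = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(target, loss.item())
```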
@mithunpaul08 please feel free to send a PR to improve the wording of the documentation! |
This may be a silly question, but I was trying to edit the documentation and couldn't find it. Isn't the GitHub page for the documentation nn.html (which in turn contains torch.nn.CrossEntropyLoss), kept here? I wasn't able to understand the layout. Could you kindly point me to the right page so that I can edit it and submit a pull request? If it's too much trouble, please ignore. |
If you edit the docstring for nn.NLLLoss here, the changes will be reflected on our documentation website. You can preview a copy on your computer by cd-ing into |
…ss as suggested by developers in issue pytorch#5554
@zou3519, I have created a pull request; Travis checks are pending. Please approve if it looks OK. Sorry it took this long. |
Hello, I ran into this same problem in PyTorch 1.0.1.
|
IndexError Traceback (most recent call last)
in sample_and_plot(im_id)
~\OneDrive\Desktop\image_caption_pytorch\solver.py in sample(self, features, max_length, b_size, model_mode, search_mode)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) |
Hi @mithunpaul08, I am working on a project related to person re-identification. I am trying to re-implement the code of the CVPR paper "ABD-Net: Attentive but Diverse Person Re-Identification". I trained the ABD-Net architecture with ResNet and DenseNet backbones, but when I try to train it with a ShuffleNet backbone I get this error. Could you please help me? =================================================================
|
Just change the image file and the text data so that the number of images and the number of text entries correspond to each other.
Dinesh
…On Wed, 14 Oct 2020, 06:26 Hayat ullah wrote:
File "train.py", line 147, in main
    train(epoch, model, criterion, regularizer, optimizer, trainloader, use_gpu, fixbase=True)
File "train.py", line 246, in train
    loss = criterion(outputs, pids)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
File "F:\Sami ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 56, in forward
    return self._forward(inputs[1], targets)
File "F:\Sami ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 52, in _forward
    return sum([self.apply_loss(x, targets) for x in inputs_tuple]) / len(inputs_tuple)
File "F:\Sami ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 52, in <listcomp>
    return sum([self.apply_loss(x, targets) for x in inputs_tuple]) / len(inputs_tuple)
File "F:\Sami ullah work\Attention Code\paper 2_new code\torchreid\losses\cross_entropy_loss.py", line 32, in apply_loss
    log_probs = self.logsoftmax(inputs)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\activation.py", line 1179, in forward
    return F.log_softmax(input, self.dim, _stacklevel=5)
File "C:\Users\Hayat\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py", line 1350, in log_softmax
    ret = input.log_softmax(dim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
|
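The bottom frame of the trace above can be reproduced in isolation: log_softmax over dim=1 needs at least a 2-D tensor. A sketch with made-up data, not the poster's actual inputs:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4)              # 1-D tensor: only dims -1 and 0 exist
try:
    F.log_softmax(x, dim=1)     # dim=1 is out of range -> IndexError
except IndexError as e:
    print(e)

y = F.log_softmax(x.unsqueeze(0), dim=1)  # (1, 4): dim=1 is now valid
print(y.shape)
```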
Could you please explain in a bit detail.. thank you |
I think you have downloaded a dataset whose dimensions vary in size; that is why you get a dimension-out-of-range error. So before training, make sure the dataset you chose, i.e. the image set and the test set, is of the correct size. For example, if one image corresponds to five data entries, make sure each image has five entries in the training set.
|
Class Dataset
Class Classifier
Training Loop
Error
|
The problem is still the same: choose the correct dataset, with matching numbers of text entries and images.
…On Fri, 25 Dec 2020, 16:18 Bhavishya Pandit wrote:
*Class Dataset*
class dataset(Dataset):
    def __init__(self):
        self.tf = TfidfVectorizer(max_df=0.99, min_df=0.005)
        self.x = self.tf.fit_transform(corpus).toarray()
        self.y = list(df.review)
        self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(self.x, self.y, test_size=0.2)
        self.token2idx = self.tf.vocabulary_
        self.idx2token = {idx: token for token, idx in self.token2idx.items()}
        print(self.idx2token)

    def __getitem__(self, i):
        return self.x_train[i, :], self.y_train[i]

    def __len__(self):
        return self.x_train.shape[0]
*Class Classifier*
class classifier(nn.Module):
    def __init__(self, vocab_size, hidden1, hidden2):
        super(classifier, self).__init__()
        self.fc1 = nn.Linear(vocab_size, hidden1)
        self.fc2 = nn.Linear(hidden1, hidden2)
        self.fc3 = nn.Linear(hidden2, 1)

    def forward(self, inputs):
        x = F.relu(self.fc1(inputs.squeeze(1).float()))
        x = F.relu(self.fc2(x))
        return self.fc3(x)
Training Loop
epochs = 10
total = 0
model.train()
for epoch in tqdm(range(epochs)):
    progress_bar = tqdm_notebook(train_loader, leave=False)
    losses = []
    correct = 0
    for inputs, target in progress_bar:
        model.zero_grad()
        output = model(inputs)
        print(output.squeeze().shape)
        print(target.shape)
        loss = criterion(output.squeeze(), target.float())
        loss.backward()
        nn.utils.clip_grad_norm_(model.parameters(), 3)
        optim.step()
        correct += (output == target).float().sum()
        progress_bar.set_description(f'Loss: {loss.item():.3f}')
        losses.append(loss.item())
        total += 1
    epoch_loss = sum(losses) / total
    train_losses.append(epoch_loss)
    tqdm.write(f'Epoch #{epoch + 1}\tTrain Loss: {epoch_loss:.3f}\tAccuracy: {correct/output.shape[0]}')
*Error*
IndexError Traceback (most recent call last)
<ipython-input-78-6b86c97bcabf> in <module>
14 print(output.squeeze().shape)
15 print(target.shape)
---> 16 loss=criterion(output.squeeze(),target.float())
17 loss.backward()
18 nn.utils.clip_grad_norm_(model.parameters(), 3)
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
930 def forward(self, input, target):
931 return F.cross_entropy(input, target, weight=self.weight,
--> 932 ignore_index=self.ignore_index, reduction=self.reduction)
933
934
~\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2315 if size_average is not None or reduce is not None:
2316 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2317 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2318
2319
~\Anaconda3\lib\site-packages\torch\nn\functional.py in log_softmax(input, dim, _stacklevel, dtype)
1533 dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
1534 if dtype is None:
-> 1535 ret = input.log_softmax(dim)
1536 else:
1537 ret = input.log_softmax(dim, dtype=dtype)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
|
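A likely cause in the training loop above: the classifier ends in nn.Linear(hidden2, 1), so output.squeeze() is 1-D, and CrossEntropyLoss's internal log_softmax(input, dim=1) then has no dim 1 to work on. A sketch of the mismatch and one common fix (using BCEWithLogitsLoss for a single-logit binary classifier; whether that fits depends on the actual task):

```python
import torch
import torch.nn as nn

output = torch.randn(8, 1)                  # single output unit, batch of 8
target = torch.randint(0, 2, (8,)).float()  # binary labels

try:
    # A (8,) input has no dim 1; depending on the torch version this
    # raises IndexError (as in the trace above) or a shape error.
    nn.CrossEntropyLoss()(output.squeeze(), target.long())
except (IndexError, RuntimeError, ValueError) as e:
    print(e)

# A single-logit binary classifier pairs naturally with BCEWithLogitsLoss,
# which expects input and target of the same shape (N,).
loss = nn.BCEWithLogitsLoss()(output.squeeze(1), target)
print(loss)
```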
Same error message for torch 1.8.0. I think the issue still exists after all these years. |
I think the issue is the same. If you properly check the dataset length against the image-set length, you will find the dimension error. Get the data from Kaggle, or if you are using MS COCO, get it from the official Microsoft COCO link.
|
@kaelzhang @DineshShrestha please ask in the forums https://discuss.pytorch.org/ (or open an issue if you think this is a bug). It's hard for us to keep track of activity on closed issues especially if they've been closed for a while. |
@m416kar98k please open a new issue. |
Try output1.view(1, -1); it reshapes a 1-D output into a (1, N) batch. |
If your code has "outputs = model(ids, mask, token_type_ids).squeeze()", the error can be fixed by removing the squeeze: [ outputs = model(ids, mask, token_type_ids) ] works fine. |
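A sketch of why removing the squeeze helps when the batch size is 1: squeeze() drops every size-1 dimension, including the batch dimension the loss function needs (shapes here are illustrative):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 5)      # a batch of one sample: (N=1, C=5)
squeezed = logits.squeeze()     # (5,): the batch dim was dropped too

print(F.log_softmax(logits, dim=1).shape)  # works: torch.Size([1, 5])
try:
    F.log_softmax(squeezed, dim=1)         # the IndexError from this issue
except IndexError as e:
    print(e)
```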
The output that I'm getting is =>
This is correct as far as the documentation is concerned, but I still get the following error:
dimension out of range (expected to be in range of [-1, 0], but got 1)