
How do you evaluate Chapter 3 Model? #52

Closed
lenoqt opened this issue Nov 30, 2020 · 3 comments

Comments


lenoqt commented Nov 30, 2020

As the title says, how do you evaluate a CNN? I tried using the same approach as in Chapter 2, but I can't get it to work. I get the following:

---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
11 img = img_transforms(img).to(device)
12 cnnet.eval()
---> 13 prediction = F.softmax(cnnet(img), dim=1)
14 prediction = prediction.argmax()
15 cats_pred.append(labels[prediction])

D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),

in forward(self, x)
30
31 def forward(self, x):
---> 32 x = self.features(x)
33 x = self.avgpool(x)
34 x = torch.flatten(x, 1)

D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),

D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\container.py in forward(self, input)
115 def forward(self, input):
116 for module in self:
--> 117 input = module(input)
118 return input
119

D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),

D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
421
422 def forward(self, input: Tensor) -> Tensor:
--> 423 return self._conv_forward(input, self.weight)
424
425 class Conv3d(_ConvNd):

D:\Users\gusta\anaconda3\envs\book-1\lib\site-packages\torch\nn\modules\conv.py in _conv_forward(self, input, weight)
417 weight, self.bias, self.stride,
418 _pair(0), self.dilation, self.groups)
--> 419 return F.conv2d(input, weight, self.bias, self.stride,
420 self.padding, self.dilation, self.groups)
421

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 11, 11], but got 3-dimensional input of size [3, 64, 64] instead


MarcusFra commented Nov 30, 2020

This error is due to the wrong tensor shape: the tensor represents only a single image, so its shape is [3, 64, 64]. It needs to be converted to a tensor with a shape of [1, 3, 64, 64], because the model expects batched input (as a DataLoader would feed it) rather than single images. To do that, you can use the torch.unsqueeze() method:

from PIL import Image
import torch
import torch.nn.functional as F

labels = ['cat', 'fish']

img = Image.open(FILENAME)
img = img_transforms(img).to(device)
# add the missing batch dimension: [3, 64, 64] -> [1, 3, 64, 64]
img = torch.unsqueeze(img, 0)
cnnet.eval()
prediction = F.softmax(cnnet(img), dim=1)
prediction = prediction.argmax()
print(labels[prediction])

torch.unsqueeze(img, 0) adds the additional "batch" dimension at index 0.
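For intuition, here is a quick standalone sketch (using a dummy Conv2d layer with the same [64, 3, 11, 11] weight shape as in your traceback, not the book's actual model) showing why the input has to be 4-dimensional:

import torch
import torch.nn as nn

# a first conv layer with weight shape [64, 3, 11, 11], as in the traceback
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=11)

img = torch.randn(3, 64, 64)       # a single image: [channels, height, width]
batched = torch.unsqueeze(img, 0)  # [1, 3, 64, 64]: a batch containing one image

# conv(img) would raise the same RuntimeError as above;
# conv(batched) works because conv2d expects [batch, channels, height, width]
out = conv(batched)
print(out.shape)                   # torch.Size([1, 64, 54, 54])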

If you want to predict multiple images, you can use a test data loader (like the other data loaders) without that additional line, in the way you implemented it before with the list you appended to; you may also want to keep an index or some other information recording which image each prediction refers to.

I've added the missing line to Chapter 2.ipynb --> #53


MarcusFra commented Nov 30, 2020

Just to quickly add the alternative using a test data loader. One solution for named predictions might be to use test_data_loader.dataset.samples:

import torch
import torch.nn.functional as F
import torchvision

batch_size = 64
test_data_path = "YOURPATH"
test_data = torchvision.datasets.ImageFolder(root=test_data_path,
                                             transform=img_transforms,
                                             is_valid_file=check_image)
# shuffle must be False so the batch order matches dataset.samples below
test_data_loader = torch.utils.data.DataLoader(test_data,
                                               batch_size=batch_size,
                                               shuffle=False)

labels = ['cat', 'fish']
preds = []
# dataset.samples is a list of (image_path, class_index) tuples
img_names = list(test_data_loader.dataset.samples)

cnnet.eval()
with torch.no_grad():
    for batch in test_data_loader:
        inputs, targets = batch
        inputs = inputs.to(device)
        output = cnnet(inputs)
        prediction = F.softmax(output, dim=1)
        # torch.max over dim=1 returns (max values, indices); we only need the indices
        prediction = torch.max(prediction, dim=1)
        # convert the index tensor to a plain list
        predictions = prediction[1].tolist()
        # map class indices to the predicted label strings cat/fish
        predictions = [labels[ind] for ind in predictions]
        preds.append(predictions)

# flatten the list of per-batch lists
preds = [item for sublist in preds for item in sublist]
result = list(zip(preds, img_names))
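If you also want a single accuracy figure for the test set rather than per-image predictions, a minimal sketch (reusing the same cnnet, device, and test_data_loader as above) could look like this:

correct = 0
total = 0

cnnet.eval()
with torch.no_grad():
    for inputs, targets in test_data_loader:
        inputs = inputs.to(device)
        targets = targets.to(device)
        output = cnnet(inputs)
        # softmax is monotonic, so taking argmax over the raw outputs is enough
        predicted = output.argmax(dim=1)
        correct += (predicted == targets).sum().item()
        total += targets.size(0)

print(f"accuracy: {correct / total:.2%}")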


lenoqt commented Dec 1, 2020

Thanks! Seems to work now.

lenoqt closed this as completed Dec 1, 2020