Chapter 6: find_lr(), train() - RuntimeError Output size is too small #18

Closed

MarcusFra opened this issue Apr 20, 2020 · 2 comments

@MarcusFra (Contributor)

Hi,

After inserting the missing train_loader argument into the find_lr() call (we should open a PR for this once the error below is resolved), I get a RuntimeError in the current version of Chapter6.ipynb.

After running

torch.save(audionet.state_dict(), "audionet.pth")
optimizer = optim.Adam(audionet.parameters(), lr=0.001)
### added: , train_loader, device="cuda"
logs,losses = find_lr(audionet, nn.CrossEntropyLoss(), optimizer, train_loader, device="cuda")
plt.plot(logs,losses)

I get this error:

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-47-d21b94029709> in <module>()
      2 optimizer = optim.Adam(audionet.parameters(), lr=0.001)
      3 ### added: , train_loader, device="cuda"
----> 4 logs,losses = find_lr(audionet, nn.CrossEntropyLoss(), optimizer, train_loader, device="cuda")
      5 plt.plot(logs,losses)

4 frames

<ipython-input-41-3824bd6072de> in find_lr(model, loss_fn, optimizer, train_loader, init_value, final_value, device)
     49         targets = targets.to(device)
     50         optimizer.zero_grad()
---> 51         outputs = model(inputs)
     52         loss = loss_fn(outputs, targets)
     53 

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

<ipython-input-44-db3925c72c5e> in forward(self, x)
     31         x = F.relu(self.bn4(x))
     32         x = self.pool4(x)
---> 33         x = self.avgPool(x)
     34         x = x.squeeze(-1)
     35         x = self.fc1(x)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/pooling.py in forward(self, input)
    484         return F.avg_pool1d(
    485             input, self.kernel_size, self.stride, self.padding, self.ceil_mode,
--> 486             self.count_include_pad)
    487 
    488 

RuntimeError: Given input size: (512x1x1). Calculated output size: (512x1x0). Output size is too small

The same happens when I run

train(audionet, optimizer, torch.nn.CrossEntropyLoss(),train_loader, valid_loader, epochs=20)

I ran the code in Colab.
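For context, a minimal sketch (hypothetical shapes, not the book's code) of what seems to trigger the error: after the conv/pool blocks the feature map's temporal length can shrink to 1, so an AvgPool1d whose kernel is larger than that remaining length cannot produce any output. One possible workaround, not necessarily the intended fix, would be nn.AdaptiveAvgPool1d(1), which always yields a fixed-length output:

import torch
import torch.nn as nn

# Hypothetical shapes: the temporal dimension has already collapsed to 1
x = torch.randn(1, 512, 1)           # (batch, channels, length)

pool = nn.AvgPool1d(kernel_size=4)   # kernel larger than the remaining length
try:
    pool(x)
except RuntimeError as e:
    print(e)                         # "... Output size is too small"

# Possible workaround: adaptive pooling always emits a fixed-size output
adaptive = nn.AdaptiveAvgPool1d(1)
print(adaptive(x).shape)             # torch.Size([1, 512, 1])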

@falloutdurham (Owner)

Hi - I have a little update on quantizing models hopefully coming at the weekend, so I'll take a look into this then as well (and upload the casper pic!).

@falloutdurham (Owner)

My email is ian@snappishproductions.com - I didn't realize it was being translated!
