
nChannels not according to paper #16

Open · wants to merge 4 commits into master
Conversation

PabloRR100 commented:

Hi @bamos,

According to the paper, the number of initial channels is twice the growth rate only for the BC-type DenseNets. Since the implementation leaves room for omitting the bottleneck, we should check for that before setting nChannels; if the network is not BC, nChannels should be 16.
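Concretely, the check could be as simple as this sketch (initial_channels is a hypothetical helper; growthRate and bottleneck mirror the names in densenet.py):

def initial_channels(growthRate, bottleneck):
    # DenseNet-BC starts with 2 * growth rate channels;
    # the plain DenseNet in the paper starts with 16.
    return 2 * growthRate if bottleneck else 16

assert initial_channels(12, bottleneck=True) == 24   # BC variant: 2k
assert initial_channels(12, bottleneck=False) == 16  # plain variant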

I also included a question about the forward method, and a parameter-count sanity check that does not match the BC numbers reported by the authors.

Regards,
Pablo

Commit: Solved the problems with the parameters on the DenseNets-BC
bamos (Owner) left a comment:

Thanks! I added a few comments inline to the PR.

densenet.py Outdated
#out = torch.squeeze(F.avg_pool2d(F.relu(self.bn1(out)), 8)) ## Why did you relu and bn again?
out = self.avgpool(8)
# out = F.log_softmax(self.fc(out))
out = self.fc(out) # You can define the softmax within the loss function right?
bamos (Owner):

I don't think so; this part shouldn't have been changed.

densenet.py Outdated
out = torch.squeeze(F.avg_pool2d(F.relu(self.bn1(out)), 8))
out = F.log_softmax(self.fc(out))
#out = torch.squeeze(F.avg_pool2d(F.relu(self.bn1(out)), 8)) ## Why did you relu and bn again?
out = self.avgpool(8)
bamos (Owner) commented on Aug 16, 2018:

If I understand correctly, adding self.avgpool is a stylistic change. I prefer keeping the torch.nn.functional version. Also, shouldn't out be the input to the pooling layer? How is this code working?

My point is that CNN models normally output the raw result of the fully connected layer. That vector is then passed, together with the correct labels, to the criterion(preds, targets) to compute the loss (e.g. cross-entropy).
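For what it's worth, the two conventions are numerically equivalent in PyTorch, since cross-entropy applies log_softmax internally; a quick standalone check (shapes are illustrative):

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)               # raw fc outputs: batch of 4, 10 classes
targets = torch.tensor([1, 0, 3, 7])

# Old style: forward returns log-probabilities, loss is NLLLoss
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), targets)

# Proposed style: forward returns raw logits, loss is CrossEntropyLoss
loss_ce = F.cross_entropy(logits, targets)

assert torch.allclose(loss_nll, loss_ce)   # cross_entropy = log_softmax + nll_loss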

And yes, self.avgpool and the flatten are only there to keep the style consistent and more readable; the functionality is the same.
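A quick standalone check of that equivalence, assuming self.avgpool was meant to be nn.AvgPool2d(8) applied to out (shapes are illustrative):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(2, 342, 8, 8)             # e.g. the final 8x8 feature map

avgpool = nn.AvgPool2d(8)                 # module style; 8 is the kernel size
out_module = torch.squeeze(avgpool(x))    # the tensor, not 8, is the input

out_functional = torch.squeeze(F.avg_pool2d(x, 8))   # functional style

assert torch.equal(out_module, out_functional)       # same values, shape (2, 342)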