
Replace some functions in the forward method of Inception3 with Module object calls #2287

Merged · 1 commit merged into pytorch:master on Jun 5, 2020

Conversation

@mitmul (Contributor) commented on Jun 4, 2020

I tried to use torchvision.models._utils.IntermediateLayerGetter to extract features from the output of the Mixed_7c block of the Inception3 model, and realized that naively using this useful class with the model skips some operations, such as F.max_pool2d, F.adaptive_avg_pool2d, and F.dropout, which are applied in the forward method of the model class.

So I think it would be good to add those operations as nn.Module objects in the constructor, e.g., an nn.MaxPool2d object for the max pooling operation, etc., so that IntermediateLayerGetter can perform the desired forward computation with the Inception3 model.
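As a concrete illustration of why the functional calls get skipped, here is a small self-contained toy example (FunctionalPool and ModulePool are made-up models for illustration only, not code from this PR): IntermediateLayerGetter only replays registered child modules in registration order, so anything done via torch.nn.functional inside forward is silently dropped.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models._utils import IntermediateLayerGetter

class FunctionalPool(nn.Module):
    # Pooling is done with a functional call inside forward,
    # so it is invisible to IntermediateLayerGetter.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)
        self.conv2 = nn.Conv2d(8, 16, 3)

    def forward(self, x):
        x = self.conv1(x)
        x = F.max_pool2d(x, kernel_size=2)  # skipped by IntermediateLayerGetter
        return self.conv2(x)

class ModulePool(nn.Module):
    # Pooling is registered as a child module, so it is replayed correctly.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3)
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.conv2 = nn.Conv2d(8, 16, 3)

    def forward(self, x):
        return self.conv2(self.pool(self.conv1(x)))

x = torch.rand(1, 3, 32, 32)
f = IntermediateLayerGetter(FunctionalPool(), return_layers={'conv2': 'out'})
m = IntermediateLayerGetter(ModulePool(), return_layers={'conv2': 'out'})
print(f(x)['out'].shape)  # pooling was skipped -> larger spatial size
print(m(x)['out'].shape)  # pooling applied -> smaller, expected size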

With this PR, the way to extract feature maps from intermediate layer outputs of Inception3 changes as follows.

Before:

import torchvision.models as models
import torch
import torch.nn.functional as F

class Inception3_feature(torch.nn.Module):
    def __init__(self, net):
        super().__init__()
        self.net = net
    def forward(self, x):
        x = self.net.Conv2d_1a_3x3(x)
        x = self.net.Conv2d_2a_3x3(x)
        x = self.net.Conv2d_2b_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2)
        x = self.net.Conv2d_3b_1x1(x)
        x = self.net.Conv2d_4a_3x3(x)
        x = F.max_pool2d(x, kernel_size=3, stride=2)
        x = self.net.Mixed_5b(x)
        x = self.net.Mixed_5c(x)
        x = self.net.Mixed_5d(x)
        x = self.net.Mixed_6a(x)
        x = self.net.Mixed_6b(x)
        x = self.net.Mixed_6c(x)
        x = self.net.Mixed_6d(x)
        x = self.net.Mixed_6e(x)
        x = self.net.Mixed_7a(x)
        x = self.net.Mixed_7b(x)
        x = self.net.Mixed_7c(x)
        return x

extractor = Inception3_feature(models.inception_v3(pretrained=True))
x = torch.rand((1, 3, 224, 224))

feature = extractor(x)

After:

import torchvision.models as models
import torch
from torchvision.models._utils import IntermediateLayerGetter

inception = models.inception_v3(pretrained=True, aux_logits=False)
extractor = IntermediateLayerGetter(inception, return_layers={'Mixed_7c': 'out'})
x = torch.rand((1, 3, 224, 224))

feature = extractor(x)
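One small note on the new usage: IntermediateLayerGetter returns an OrderedDict keyed by the names given in return_layers, so the Mixed_7c feature map itself is accessed as, for example:

mixed_7c_output = feature['out']  # tensor output of the Mixed_7c block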

I would appreciate it if this PR could be accepted.

@fmassa (Member) left a comment


Thanks a lot!

For information, the current situation for extracting intermediate features in PyTorch is not perfect, and IntermediateLayerGetter is fairly fragile, especially if the order in which the modules are registered is not the same as the order in which they are used in the forward pass.

One proposal we had was to use TorchScript to potentially give us a more robust solution, but it is still under discussion in pytorch/pytorch#21064.
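To make the fragility mentioned above concrete, here is a toy sketch (the OutOfOrder model is purely illustrative, not torchvision code): when modules are registered in one order but used in another, IntermediateLayerGetter replays them in registration order and produces something different from the real forward pass.

import torch
import torch.nn as nn
from torchvision.models._utils import IntermediateLayerGetter

class OutOfOrder(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 8, 3)    # registered first
        self.head = nn.Conv2d(8, 8, 3)    # registered second, but...
        self.block = nn.Conv2d(8, 8, 3)   # ...used before `head` in forward

    def forward(self, x):
        return self.head(self.block(self.stem(x)))

model = OutOfOrder()
getter = IntermediateLayerGetter(model, return_layers={'block': 'out'})

x = torch.rand(1, 3, 32, 32)
# The getter applies stem -> head -> block (registration order), not
# stem -> block -> head as model.forward does, so the extracted 'block'
# output is not the activation the real forward pass produces.
print(getter(x)['out'].shape)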

@fmassa fmassa merged commit 3e06bc6 into pytorch:master Jun 5, 2020
facebook-github-bot pushed a commit referencing this pull request on Jul 8, 2020 (#2423):

Summary: … (#2287)

Pull Request resolved: #2423
Reviewed By: zhangguanheng66
Differential Revision: D22432629
Pulled By: fmassa
fbshipit-source-id: 609ad51d4913cb683a22afcaa61c0fe956af0a2d