
could you provide a detailed environment? #44

Closed
newExplore-hash opened this issue Sep 21, 2020 · 5 comments

Comments

@newExplore-hash

I really appreciate your awesome work, but based on your brief description of the installation requirements, I found that this program can't run at all; various bugs appear, such as a Segmentation fault (core dumped) caused by InPlaceABNSync. I used BatchNorm instead of InPlaceABNSync, but the performance is very bad.

Thanks

@GoGoDuck912
Owner

Hi @newExplore-hash ,

In brief, you don't need to install the latest InPlaceABNSync yourself. The CUDA files in ./modules are compiled by PyTorch automatically.

As for your environment problems, please refer to https://github.com/mapillary/inplace_abn/tree/v0.1.1 for the required packages.

@GoGoDuck912
Owner

Moreover, if you only need inference, you can replace each InPlaceABNSync layer with one BatchNorm2d layer and one LeakyReLU layer in PyTorch.
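A minimal sketch of that replacement (an assumption about the intended layers, not code from this repo): InPlaceABN's default activation is leaky_relu with negative slope 0.01, which matches nn.LeakyReLU's default. Note that inserting the activation as a separate entry changes nn.Sequential submodule indices, which matters when loading a pretrained state_dict.

```python
import torch
import torch.nn as nn

# Hypothetical inference-only stand-in for InPlaceABNSync(256):
# plain batch norm followed by LeakyReLU (InPlaceABN's default
# activation, negative_slope=0.01, is also nn.LeakyReLU's default).
abn = nn.Sequential(
    nn.BatchNorm2d(256),
    nn.LeakyReLU(negative_slope=0.01, inplace=True),
)

x = torch.randn(2, 256, 8, 8)
y = abn(x)  # same shape as the input, normalized then activated
```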

@newExplore-hash
Author

> Moreover, if you only need inference, you can replace the InPlaceABNSync layer with one BatchNorm2D layer and one LeakyRelu layer in PyTorch.

Yes, I'm only using this for inference. I know the role of InPlaceABNSync is to reduce the memory required for training deep networks, so I used nn.BatchNorm2d and nn.LeakyReLU instead of InPlaceABNSync in AugmentCE2P.py for inference, but I get the following error:
Traceback (most recent call last):
  File "simple_extractor.py", line 166, in <module>
    main()
  File "simple_extractor.py", line 115, in main
    model.load_state_dict(new_state_dict)
  File "/root/miniconda3/envs/human_parsing/lib/python3.6/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNet:
    Missing key(s) in state_dict: "decoder.conv3.4.weight", "decoder.conv3.4.bias", "decoder.conv3.4.running_mean", "decoder.conv3.4.running_var".
    Unexpected key(s) in state_dict: "decoder.conv3.2.weight", "decoder.conv3.3.bias", "decoder.conv3.3.running_mean", "decoder.conv3.3.running_var".
    size mismatch for decoder.conv3.3.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).

This error only occurs in Decoder_Module's self.conv3; PSPModule, Edge_Module, and the other Decoder_Module operations (self.conv1 and self.conv2) load normally.

This is the code:
class Decoder_Module(nn.Module):
    """
    Parsing Branch Decoder Module.
    """

    def __init__(self, num_classes):
        super(Decoder_Module, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1, padding=0, dilation=1, bias=False),
            InPlaceABNSync(256),
            #######
            nn.LeakyReLU()
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(256, 48, kernel_size=1, stride=1, padding=0, dilation=1, bias=False),
            InPlaceABNSync(48),
            ########
            nn.LeakyReLU()
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(304, 256, kernel_size=1, padding=0, dilation=1, bias=False),
            InPlaceABNSync(256),
            ########
            nn.LeakyReLU(),
            nn.Conv2d(256, 256, kernel_size=1, padding=0, dilation=1, bias=False),
            InPlaceABNSync(256),
            #########
            nn.LeakyReLU()
        )

        self.conv4 = nn.Conv2d(256, num_classes, kernel_size=1, padding=0, dilation=1, bias=True)

Thanks

@fuyawangye

I modified "networks/AugmentCE2P.py" to replace the C++ extension "InPlaceABNSync" for inference on CPU, and it worked:

# from ..modules import InPlaceABNSync

# BatchNorm2d = functools.partial(InPlaceABNSync, activation='none')
BatchNorm2d = nn.BatchNorm2d


class ReplacePlaceABNSync(BatchNorm2d):
    def __init__(self, *args, **kwargs):
        super(ReplacePlaceABNSync, self).__init__(*args, **kwargs)
        self.act = nn.LeakyReLU()

    def forward(self, input):
        output = super(ReplacePlaceABNSync, self).forward(input)
        output = self.act(output)
        return output

@fuyawangye

@newExplore-hash
Modifying "networks/AugmentCE2P.py" as follows is better:

# from ..modules import InPlaceABNSync

# BatchNorm2d = functools.partial(InPlaceABNSync, activation='none')
BatchNorm2d = nn.BatchNorm2d


class InPlaceABNSync(BatchNorm2d):
    def __init__(self, *args, **kwargs):
        super(InPlaceABNSync, self).__init__(*args, **kwargs)
        self.act = nn.LeakyReLU()

    def forward(self, input):
        output = super(InPlaceABNSync, self).forward(input)
        output = self.act(output)
        return output
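A quick standalone check of why this subclass works where inserting a separate nn.LeakyReLU did not: the subclass occupies a single slot in nn.Sequential, so the state_dict keys stay identical to the original model and the pretrained checkpoint loads cleanly. The layer shapes below mirror decoder.conv3 from the traceback; the comparison itself is illustrative, not repo code.

```python
import torch.nn as nn


# The subclass from the comment above: BatchNorm2d with LeakyReLU
# applied inside forward(), so it is one module, not two.
class InPlaceABNSync(nn.BatchNorm2d):
    def forward(self, input):
        output = super(InPlaceABNSync, self).forward(input)
        return nn.functional.leaky_relu(output)


# Subclass version keeps the original indices 0..3, as in the checkpoint.
fused = nn.Sequential(
    nn.Conv2d(304, 256, kernel_size=1, bias=False),
    InPlaceABNSync(256),
    nn.Conv2d(256, 256, kernel_size=1, bias=False),
    InPlaceABNSync(256),
)

# Inserting nn.LeakyReLU as its own entry shifts the second conv to
# index 3 and its batch norm to index 4 -- exactly the missing and
# unexpected keys reported in the traceback above.
shifted = nn.Sequential(
    nn.Conv2d(304, 256, kernel_size=1, bias=False),
    nn.BatchNorm2d(256),
    nn.LeakyReLU(),
    nn.Conv2d(256, 256, kernel_size=1, bias=False),
    nn.BatchNorm2d(256),
    nn.LeakyReLU(),
)

print("2.weight" in fused.state_dict())    # True: second conv stays at index 2
print("2.weight" in shifted.state_dict())  # False: LeakyReLU occupies index 2
```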
